[binary artifact: tar archive of var/home/core/zuul-output/ containing logs/kubelet.log.gz (a gzip-compressed kubelet log); the compressed contents are not recoverable as text]
Z~)GW6\/*Bs  N"[(S%n.!B(\PIh笹8J6&bPI1aJ'gZA'v+d rŜiהqP(I!F PX`c`ym9ӏnhﺙׇB,Y7 e1^}{esq̧?6to-&CRAsi FF Q #BdRPE1QRըU[!yRH# & xV;!Se(9 e[iM-[< tRAg!(1BĉF'm%21 t41) *M BrBc5#UڭG):4S9d_'D\ D3E P6P|]Qu|"9: -ΡAi@ڦ4GҐ$E4Q9@<~潍uf5]e~]ڇLMl+X8L E`?^L:O_(@;UBW_vB_JTu!Em  Qqw6.Z]lUpK%EBM\8.L1 ].NL@0OVw"ur)D 1PKwQ Θ GsE3b{`Y+ԁ~^;MКRSU)WeX]~~<]ĕi*B.F8\cb~!28% ⳏ7FǗɷqC6,eeytYUn?E3ih4k9Urk_+[ռm`[+ tցR֙oVl>ڠSQ*+]mȡP(s4Zө?X `:0 sJ.d!B*#'dH# qlW]7OzoFE+-l2Y~oqp}]膎WJ=\y5&xEmf4AٶDasS+=Uϭ 4i7/*!Bi97~Ԋ.BE[(e4Y՟ٗ;9CNNAZ0֣/8s{eld:$1J52^ՎFo|W:e 4ȩFL!MDA$0JhSyV;O5053;!4^sii3/ ӫmmT@TOi){{S+B_[YŔCe&K#F_34?0c YWFV H"&N "OF,ɬ`?j~^w(2aeIơ©{ټ^,MI4omfPOf=S|ϫNB3d -d V@58+"7ڑH#,).%$X W b6HU L+`ZK-jô:iCX!5@"2kɷ ̀y{` m%dU>n2M 3M2{듏JDBcҊF*lR[+$ÿ>ajzi--4>bs,-Aa LH .XD TVp*/-2)7ߊgF3nsNR4+.xۥ*`^9hI H2.uk`ޅPVe^Gl䲼Ӭ=%Կ؈557lj0L:w{Ԡ3JwBl^ Vݽw8#+C |Djt (!;ֈϰCYQERw~qFSܬщk%Wߺ$z"ewb#co/lT \PG8W)Ռ?.=SWzˌk\{{]s⨻f8.V]ceMRMc{˦4Iɧ:|vyGi)qh|nm_Nuыg˽f'o:]̚m6[AnGqM8GǫR1E.,B }on]饸µ>}t{0^ٲ9URQNSi}m~з1]MUj\m@.~N-o۩s VU'\ W&("k'!1rSث#K|U.8 B!]Gw#p4QI&TM'yVlLlyF GQ=ɶ[Q;xE+nxʉ.gr,:b] s1ʣQWQK'UC;*WxaphV͡C[)n DIY@dN$12ݮ4YtHb)ܙn֗=.%ehkHĦ6 HJH"6X&*Mx*M[Lۍ(K|[n>j'ۣo.ߩjgw2gQ.'yL'2}A~B$pR")" UTUUD"cG҉cŸ Bri"V`̇VkHVlupyl{9{qX9D& iL ,rgP35hga6 ,l%ܼW/Êȫw~+x8#JCgR0(9D1%b {0Pj]prhǟ:!?vUG}{ە)M`[L5F;+HE:/"޳կak>EKW aO Ug94VeF*v" ShWBG.cER t2h~7G#=sqY=>(7s65A'T>JO ǿ $FeD~Cͤ.c2=]6pYn}qx8ǡnN7}.Ax~xD*h7&:J-N'z&0' lZـy,wx:Mr JC^fdz@R#╆D%JHQ`Z$BBC"FFYj֜ITNΔt SC>! Ҋs;0L}܌\x7L?hɇ@kkK&k{DxXPO=3T-ӌ_9ĨRkc#9 vJZx!RQ tS'L)<( =|mc<:4h:%J˄P_TqƽR*s9m!)<\wa*(A+W@<'W$xP8ym1h[( c?E!Z_Lwt}gQ˦Enj01tPw2J{vcX0,d/]Ua<B#˜h&A(HA" |)P$`_#SC-5B.1s]g%3sbSa~1;9i)[DQ={?ߏ|v<:/N^_ +!0p?>41?̻¼އ/m8Tǜ^'8k٭7 19&>rO*K=Z~]r߯UNklRL gEWpO C3i%( L{7 0_ۏgܤ6lihE2%3nں}iuČ߳7reĝߍ9:"/5z|XujyVH+\Ř0khvgHװ4]-0;ٞoȱ:ki:ku[#Q狭v,6or{X{& |`rNg?.T{:+u%yeX&n]v&o|[Nٚl`SŝNuO7 Qµ.]zn饥ghQ>#t媡n@Wٜ1XKqDF{1,AYw,dF}!3 }۹h^H8`1Q /"!hNH¦Ȕ^!V;2i9=\.yzƇ8JQ9*Cg™s g^W™j#v34+<gsSzmvmIwlrbѿ{6#@_{0`{6Kֻ pC'3EjIʶrO̐( ERMdže{޼9*:/lL_rV!lcd/Gbl0hWZLʼn`ꂜǰR-YSy< 7m0g Ћ'i֠iZV²'Vm~(Tbͩ`3E]6(}7/(CZJ-"Q0Š0q;B ʩ~~;}~+pxA[A)с[;wz9_0z1ڈ9J F^`+Y)-(m{}Y.y)~mmf}'ɮG{ޭ=6ՒkU"T ݑ uR I6 éER*LhvnYPI|pVOݖy(C$KjK]ߧ n {&Q+tВ:,,(❧J3xCL>+D| ATpsQ^G*#R"%)"9̰4N+΀Bu% L}ywmKB(aaĺ  a NFE]p1kcZhEDGּCGAd뎦:6 K}(hST2h85^9YNq{|s Ү C1 /Y\(>VRs.`蛰I>6EfKe 1L(v)5㳾Q]4TFK 6(behR v($/c33;<΀ .w@0T`M@ZX.ARVp. wR' ¨X|\Ο\$.sS1f`e_Wq/z(bR~nV0S#ǠBi*S/8#$LNDZN~,ͩi]͓2RyWxu1;m^n#WX{fFq8kK.F?]$F>>5%Cm$M[Gbچ!mDY0чѸQurӫ1gY':*AGOmԶJ4Y:MNQaN#i`xR].^L"1.Ƶ :ŘTNVw6կ 78?tïpwoNw'~8D^XYpp0 /g n _bhkho>4Ulzf,&ҜSn&#RhnqVD⧋F`SKN&hŃ$t%v=R`Z W"]10w+@MQղVG.Mlqo f4U-S8;U}"n.IwYU2<(ȲiR;&?㷭 -My!P ڀ= 5X&a[i`UG`>de)Y "J)`F曲C>8%;4CXs6%Ƚ9-J2j 5(O+Lwn+۸җ92нs{f A֞yQ"53'Gʆ G )Ynp$hR* QmGQjE@&aREb? KDcl-rguJk':8Nt'fmzVJDG}gVǏjCQ*%+<#J/4) K1&*1DoJbV , 7T3>T "AH 8\8kIY@5 ${$S9-StC[_gç g\[N|cj"qp4 %l+ϛp󉟺ȵ$kJ$+$q尊ֆNNPա RӠ2o-MWra*,YK-"u-I>M,wXMì`FS3MZvFUQ[P|P& $b%KsRy5:6 R_oqK碮nޘǯ<0y1l6MqWsQBj.PmZϖ瘡C ۔]Ch~nB gM*(XkGu98 w4v@4JC賦oFmV1u2 D䜤3T&&V[p~J{?gfq:ͮytm̩dߣUVo *]lMz)xO3DEuubkا/nBtڕOpfgKDLAl/5F+YV'wͻK7GZWgS]lOi%{juϭSд="5R狲m-F|ߌ:Bk50*L4fud_N:I#r(t:&&7*AJc=KwyL( qr{&Yf -8 Tz)Du` '@ĠxjIo &icK^>&ތf_%{2#rI`G4WJ%‹v<7vc=" I(h[hyKy'M@AѵGY_tɠ߲+?/0F{UʚZWOpA,tNL ;gFFT ȦAδn.6z}sEDKݣ%OBce}[n-z biyoݧf뫋vfaEXCŻhOZݷ_`!7TN~\~P)[$vpjuFˏ^:+tb'#vʹdlb sţGs R$:]qf E [42tTar*),pt H'eqA̫+ :HS&!Zfz'9qci sr^RxS] {lv[S\9b&&HN䅷! 
)\J(v;u6_>4:5%/"td^~]}-h9HETm k/*oR &Ҥ&gZ"7YX8pUOU[Uc~hqڪ4OdKoP\DGA.ÎhZR g@0O4I_]P~*/FYt5{pyt-&xt3Nx!, UZ4"DQQwqi+pJL };&1a0*0c7g=c \cazsY[7ںc΂}28IpF.;Ңrlj8yi808p:ۚx$&@R2ZTy5!XdQxd a>l z>,هLˆ2"4̈1bLjO{n,R#i A{&08Xc@Pb &rl@ AEUJ}(&)MyЂ3.8"GO ͭg䛂nw!/Nf)i^L^Y^ŎP* k[*xHQKhȀu{f x[fDjiqt(i7 ,uy[B2R"nj`]րX K\Hy<50=/h|A95deQrޙj'hK&2x!m+D ?gTttute 2`*խ2ZC"WCWzæİw j+vpŁnV>2[ЕMOԴ2`[CWRmȱUFiHGW'HWh&U*XWm+Dw;BututDkX+Hy{9HBsG+ڶ(т%ewM y׸<xbYl L݆m>l;T/v  i^]TgC*&\۫pr_LpXH Eq;n"\ [̓¨Ao;^JT宎Y 7y=_\3JN tsY),[D*rBb ')Y#9aVwjMrޜm%ڥ9uOל; 'y#wid$850+s|;52sFK#/̢)D 9_kTalHEgSLa$--dfi!mJhF G&4[Z8$- X2\ٚNW%Nѕ"%T ƺh:v(u$ üë2<>x%,ᆴ$=`m\6x;6(#ɷShBVX/CAR)eA~_s|Rf ]Rkؤ؛5xy;ĉE"%R0`J+v/(%ڴxОmTd4!Z磻ݧRnz;Ms)ְl Cڠְ]j'`PFi ȹ&MGX|\> F6 ` 5ZKy[Lc72J;SM5Cm XdzpekN3}kNW%Ȏ^ ]M ë5lq`:gīA) 9؂6=NTA*>Vp9mUFYGv`x]MU ״2Z&2J.::AqǛՓ1L\qeYsrsEhk tF+ɱtFtG'Hӂi.DW [UJ QN$'Ĩ-r3\m+DlT]$]) -+Źqj]!Z4+Íf#/;yqvM¡AOQ@'F׆],SA  X6ᮕG-xXu$!38( dyY ¥u1"P<;~` Շ]/zc YϞQ;h` j6M36{+<ճ=3k yavd͓#v"!9sxf$AAZA 便h\%ZA˅vpeVV?~9_8*(lyQ"ܼ<%ンÙ-!}x( *uyoq}[ObMP(v %6>-^>Yt#zwc%V5r\õY f**lbB|?I!_b X2`wd ,&0Ru%Zv9b$˶ZR-'Nw,Y)cy+R~ȿi[}wTqU̪7 (|dpA+_Fzh'⫼HMoNbao\ʼ; N)e_z3g&GWӗ\. k?,D =[Ի@KزYqfV-EXQUD8%:ꁘߨ>cLTJ?' )6n,.{6YZAݭsk}n-ۃ1]MNQo6K[ms?@2D'R(B8K{E锨G#,Jrk)vpOZ8Yj<'^,&|?ĺԟ'4w 1u2)b1S3m /fCy+P޻*/ |#ġ jVHՐҨMod)a*@u̓"gdG B&S#8hS2 m"j"Q_C}_ɱAƌƟ8.g[ww[@?䠌-(I 1 DV0KTh$|Δt Ǵ#K8KNPX<ў`K} fxhBYں_ QA-;ʜPN(]TU?u|;[S_۾jm5%7򿻐=\Rq0/rjq=$_L!s E X;ڷs DvhkWeŽ%D&b-*)q #IDzi.s^"`(K2Rew!˱'g3jC8ɈcL[7ų@7__n_78ºfwcc}SK)/N$)98WJVEnD%.UhCgˮ1ڔdz5¬8&bzٽ %ej=*C>\V[ʛ=_QK=pD*ʔ<^i IpY:^>'hm(-'~J[z6 zQB8 +.9eC~Hqң dDX<϶?t3_[ a;!FeFHKH0@T{R$qU ZXSrh32Fb^!L63s *"L=LK6oM-5"3}R:,I?8bש7*2^iZrwsN"IHx& rM`5T !rD8¢RYA`Kc/,v3MGM\X6L6T82X) *(. O޼(Z}܏0:c<ดU>n3M 3M2{듏JDBcҊF*lR[+$ÿnjziEHNU@h)Y$[֏YSDuI .ykʤ~)5F~~~sV:WA]-L0{D>´7\7 &i.|ee(.eLe~=%ԿXxvwLfW#' )h.G8T4%Vsk@|0/C^V?yvDVϠ+aWPlT|~ "*v_*ȭ 2!1n@/S^bz'p-GLAv;'JUg'y.pTkWD&%8m^Qϟc;`xRCʌ_`C!2I+_@!|$h|ސK7VvJ^%1c4m9ӆ캼sLfԴWu1LOd(,C|?폧7Wn\ H / ? ۥo'/zRGK7F"$Y,dN$1r [i3` ytN..%e5\A$IbI[UI$%d3IG\ OmmM:;.76imËGx9r|uqNQb_<.gv>Vg$A'>,iayZdx{6"PDeY!£),"ιdH:gZ~ta2T : k戋JRc1#tb !Qj4Vu(Ʃe -S2 ^+[Y${o}O/ow ࣃO19'Z> 8N uݻ\$ O[QVB- g!,i5{g?;ySDM mQyNF94$$ʽQhN>a Gd*RG&V~bq89|pxҿ UcY:zNy!jzQ? [#q.F#R<:A'9ᅠgC3P"[bU wVTY9]3`epŠաZ3&yB ,YWo༼h4MM!v{V:.jNQi|"f) f2ybfz@z+$7Y٦j/>_SG^q]K?;'*w %"J;Ol|H^T)Q\KC ҳj"#΂b,8)ۋf /T VBExo嚟N^t͌xoz8VafnuT}er٫ ?yW%W0Q:~gz5= zy۫MU<=|~"%IGcft5Y p3K퇙RsVSGƲ%xAБ!8A:˝;u{s{=_vL}7wAOD'^{|*0'I"9pDvnCI7ǠU_GΟ[WD'PV{K> DR~bVF 3.Gw߸щQM+ƅLCU m M035 OKg4^4|kx:4h:%gZ{ʑ_evsmE6ݽit|( %$,o+YeʖHRd,"NY&Egk)b@BO3iC )|m7[6irY1.''Jckiީz#Μz*Kur޽S>}6n Wus V dGr2X`|lvbbz; ŏ-?~Qa`f&̹ĖLZg7.:C~qU )>khN2%\k񪊚@e1:e n!ʕ~7yۈ ~8t3ƽ}WS˺PF!__y9@zaiZ.Ќ±.p%2}Uk{ock$jmқkb:쯧 6 0 ܳH9O T+-yzoH= o[cVjf:{m[͚l`]ŃNs9K7 Q´?]ztюJKgQ4RFQ:rUS7q+ld %8Oi 8A9ˌ\f av *ǂ$CHJD1)A;pJG6E'^g`Hr<;x)aW¸u :X22#31Hc+*\+I7i.~@0DHBEuJh'+&%(asQࢄ㙃 #!XK3cR{D1RzE"3A,k䘍Ԙ`39/T謏 } F9F2QTgZ/iovà mz>fruGONՕ_]%@^Z-3UR1>>]zOZ[)|.[1T_ |o85 ubu5|!O[:4:/у]_Jxe7<tE8^;Kwի|%w:]Xϝchط_Jyk8Z=Qa"Z2v-e=sxMwC ~M§ϠL[.\ͲMg2a|&ĥtU8hN%}><^VV{_,jf8eΩSBV5ZB5R;HY$Xǜ6iD "ff F'( Ds3[!= KC$EcӓLM΢"Rg3 qN*Gd5e;W9gIzh- cj`Uڊ |s♻M,)cuUo>4:x&= Z b8<MyB }|}:I0(ܘ9E(^_b+ѝi7OrsAgy;B:Y6r#^}n><^Vćg'?ƿ?Qql~n5mښhiYbPi~Tl< /M%A FgqO#NDsX'"ܢq¼F'Lr/5d:|"b; !@ 1P ༷T,NH]\E^iT3blyo&^qذf3mC34q ϷpiZnLf7TW=uF>%!׊$U.ͬy!ewKWM邜Ю;lvu{%x5ϕ\?FO7snoA9a7T܆R{k&fSsim3VmKz\޴ 8B-mL9#D .b_  FT>X }eѕe&HLbΎzM3~D/v<@97 HP:SE D =as'T1\l}7įx- 5+|dI1jpVa1Dn#GXR\*KpPcl*0%av1T82Xs(Pp@qWlxXta% 4e0ϽX&Ƅ&G%e"!1iEs*lRxr{\#XCKQ1K52@Z4%\:=ł-a?. 
i cs,-A,|?OL ]4%"^$TLUY:C??cąuT6fКGD :E 9ij s4W_3q͟Ww\O곧/IڈO; 7KW+vpdLwob!c$lU kM(լk# sB`tv #Bҝx>f_ߚ'o&]av^Ar{;sE8\Mj'wn0> BVdʖUͰU0f'G\ZG6~]7Ӂ߶ٻMVw)WV\ꪾjجqȀ K>? G!W}5F:DΰXcu5*o* ǝлW\'S̜+ሯ" */11!mSۨ'#Qj#d,pp̩$H' Hb)@r9I/ /SXnD$6IPDRA6#zj1!15E\ Og38ZB]U.x5ƙyED?1K]3 Jl癠PB>q0n2kC~pYi)" UTU<*Ku 4>AqqQI*b>F\q<N(]L@Ղ>${S㭯1: qZf^G\hg>˃yCgj&ga8 ,lWWo^5țWn*P_b8yF a 8IW#x<VJ; ?{z&.j(Yn!үYg.3oFT^7_ZapAޭ>]|6= φ/SS|v˝[^ZmwۯםGc_W`EMv.h0nklϝ enN:8;7ëLK/)^{W 8--e(C$ LE*bQ9 b)HӓxGkh-EZEr$ HH{E%i 7!-RGyZL0ϣRPL $NXGu)X89D1%b [Qj]pzhh^z_Z!vfV}-XySf)qe1 ;R)@'9Y A$ fDaF l٤wVTYpٻFSxV~pA/`t,EH=VzEqHjJvpwfꪯgD xfP^\/\/J ߰ӏ%o nvzȻ>w_/KzJc jX%dAϤ 2d< V>q6XXSd$Z3*(nV ;[8 o!rkB`1f[DŴ$OFke} -ztv_Q8d0KҞ%놷R #xmى8>yC@̩'Ykig"W 1 qaV6 ff qBw! r g#RP#,!Mm{~ ѿdD=k)ul>oȁS8%J=e/dQZ̮F/7Ϯ_Z"x]̿/"\j͖"_]4qߚ|y\m~L,g5/\ĻTR{Rg/;V;h35$pUيd>A'<@\(UNkx46GǖIQa`A/nZJwj u܆)ߔԵ_ٖjv&}?HZ:=w!6Ïu,B {Hsqeċߍ̿8m~5-/ +P­_#Y^;/E'ɋ|^-Gr`=okuyNuV}.FwMe9YvEjwL=hvO )o{l415=[wx)Xq%"CG亥4]//< O#K!F3U#% <%&Fb` D/O,FBNοsڭ!hF 8cނ1Q "kҁj+M) `<}W'"8EjP3ƅ'oθxQ%+OJw4F`W_LEWŖ@m1&P M%L*\s4:1UGՉrTQ1NH0:A2YK@jd3I%wx <5 T:QɄɀ6rX*D 1f:Qu 40Ugʺ6^zesvL#5߽рezBOq&J_wv嬯!kt fHwFׄ*Z.ݱb>e}SU3N~Ogzmٕ쮼2r|4Sjs~6Mu}dm-Kc ۽+wno~wz߭oc/` kCM yIτ(cw\j_7͗.Jk5w)l2f5#9ĝt}rN9!'ݜNIﭥJPY}Σ/T<xL( 80A,nc6PczjU28 A8!xbhj$#1( ;٧ؼ˧䠛yVg `5d_qѠmaoNf;_nZ0)ylԁnëxxZewVoV |E,J$ o85 5V| VN Jbͪ4Z#;K2۳.f,=eK5 ҮΥU*ȃ+W]3RV6<p)H)C Я,;4.A RpXL(17[3'>3Ỵ>!mvpaɅV/zWMWPrDJhՀrf B0.1?=d2JN.N-O(N- ũ;ΣՒd4,`8VP2m)ȃ#̻H J<\3P޺\ ,b4Hq1BIt10Z[)qv/?YБ.$e>gI.[%qtK[ROKSڙ4`p<{*f**" BU"ڨEӨKäH# oMp0 p\F"Vgw)*9 gO94t4Y,mͤ2V:=GfG]aI JJ;it Ɖ`|e3} X#8 nS#00L=AOftrq\ T;rU=1:vR:#6 Q8@2a6iIEZmx%N(ttl}S?U?}ש l slߛC j&~}+EMذ4_T&)s^-I(2#ūUK%.ϳ?\.%s.,U-_WI<]@߇SD՞#tЧk[&gA@ﻜGA\o|HY%dP/)GY^Kq\KwhC!?FO8n߫oߦ_id[\e;Hm2Y|?̯X|v핅 ~ @Vw@Z>J(!b>)AKP%Y%8 RTdL89&\EYndU ـpԒə᨞tT,F RX&SuL9T5!H"y!m5_ոD}RHGHE' !d̀>T :V[:ZZ=>82D!n"tJTăP%ynSpVWr;ޝo}kz'O鵏mP}%^u]kMcsmB/`"9|A|eۿE"ga$ys;\wr+O^ƣA/&)@e2W%Lheܾt}Keҁ$"A#s̫7NxAt&+ Z[ou8NdKڍnn#{|mɾCK6a:%x1$/ 9GH*K)7A\iIݠEEoܲ:0dy;6/'L߼UGtnRmv7t-- 864h!cZ2, 'H O7ZSzod9ϗwL(.DGASiZR Xf;``yT6]J&fCmTH`u<8˝@°8{Gm5'(Ł<\O|rzyӁ*@T?@L{t\>z*#CרZI<eYEhNP2%͟BDE*f\rc 514N)qv+j/j:qQ̚Wgڦ7'OJɦҝռN,z^%8=DhZ^_Cu+)lRvgnj8ϟˎ|Q;tr#R!Vhx:(tNEO4FENK-Ŭe-2:H.PDQ MOM .p:EwJj|iXL=L*laq-TmlQmDޮo22 {6'n&ҷo~3rnrb[DۃѵDLj 5g(+!Ars碉j)+M$7< | s8f+%2~ };blbFaU1Ma]LfӰb.6;ںփV\AR2Sh̓'Ls &^$H*CFdHzўG&Dxmp ?{W֑d PyXLy'DIT(c`(%ivSԽUNۧMuA6fá~Ix_Y|]cDt;"DZc^̲MNP1,h`wXjEZ2)6ED8c*E+}5l31T1p+K PVU#b3sXKTGt\Yg(%.*qq'pňL(R)đ&B FgLժqSቭ-bޘ}^-P#> |r# (U,ՏK֫ʋJ-ڿ- }1͏_\ iOCba4Ah˼qjdS:&zӒS`űhC }W92dLtLWC>ӌ \"a WqUo ejSw˸CÛn^ڵGngʸO4W۷3mHk=>dk=ޛ}{o'wM}غql$.ȁNd NlDN,>.6WO&HY6=1p:S ۈ8@S2̢S>ב;%%돆=2d8TAH1k gZ ъdO,i\R'6u|g'iF[>" 7W'.O5HsN,_%lL2oՍ\g] חDp9]nfi iwoo6ezUBӁ{}2̩$]Բune8n9U||^a 3}Yiƭ+!mY"jKaYwzK?b[u9 LS_C.2}UkOb ߓM~2e>t}ridV7hWw%.l\jSK4J!z)\S=G|:EEQJYm_gg{R Z5δ :;8={+oCzzjc/Xͪ_0y8^-fHooX 2(?7ip;)tnu5̹;; c];-3ɏa'Lkkurd69lͺM~{%KAJ]-8YDnWc}2I6i ޫ w$j:(U3ΙU6\&U^BazDhI{XgB6ɌiBhŬ:ȘD9ۛ+C`fk;'9Yy%>% g>C1sVZzAŤ$2 xSaCk`UZ\5jc6:1L`T bGi`3̫[yZg!w̜u!qHk2}DGAkQ8eU▕X9P՚XA)Ig6;VUc'gLuܻx< 9b'ppWN8:iP|-2_LHM,YYjani,31PR,P i&"'Zä"\-H@,QcOpٴ.&teO2S ^Hgx UGB@h5"fB5M=%D:2D{(ܡH\P]2mYf 2`46Zy9fxli115 (_xXkԁsHSQ%,i0Hsz~9XvňdyzbVGq|1@TRXKQZh u2L 9̰3뵝׬Joi軀:nSj!0px K8,m9#3= Nz-(] &W$-e*=h`&8bgP츱ڢRME (%* <@ M&jZV"!>X\ U:?n8@{  E@YR:[ n( 2pG[.c٦Lt VП''  Ӭ(aۂn2ЗhONF+<>ڽ?VA9TX] ,G4 wt6 1piA/9hP1A߅^e}t0B(k@)@h0McBUBGmhv)0F1#,PmV9HFgP!t99rFP4f,ZnkT=>pC B3viǍE x.Yb4(?z2Z;ho :"eJ|śZ TfW=Z2 &XvdFkP/j@ ipF %#pG뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺뀺si.xH: z,Hsƃ V,t@S79b"O&pvAAA|> }Sҫl~ >y6eˏ 4MoggsԪ/^\ ?_򝦫w^+εvE^ծo \PB 7W!z2( ?ŋkvQ*'( ޵q$2?Ip$g6^#NnB?%)ᐶEU H!GiLOwuOUuUO}ma+l)@w_\&4Z4P>X@,w Px௱N A( 
88888888888888888888888t;{a5\ѯjf]ׇ6 WM}RU tH% XÁ-)*?PKÖ]aK_OYCJේ-=ǭKzq^de妮x;I3CF`?(?Eݬgx8ia gpD)&;Kl/SARWހ0(?GNI̛ܼ;:eΧ<|ˀY,C2tLjAħ''*XWJBR%+'Q?ĽT̽p&eNJD:EĖ6J[faI^'[^_Clp;kAR2|%gStnҜr`-.{b/vwJ"hvBk: u4pNTJ7TW"RWBs+ 3z(*ŋҽJXVW_+h^'f%SfaKmNXuVZ~(e,"Ѧ=?# 3-S.ɒCKKIadD¿n>(Aۡ"^xZd:4ߟsw6޹vw4e& @7R2ޛYw3[l o~o)э{0 +(ʄ("n"2P(%wa!鹮vHrGt?]R1K}brһO]H| :h/gLlXT5 [E}82imhvŅ{φڏ2^ft᭙\9 Y\#(ǠjEqEh>~԰ѹrnn|(Up9V'#%s= B^B+uˊ|_{oF7Wm*ɖ6 B1{D1g|Z6U&FP^x⨐6p%5Yx;6{dn)`)Sj_nreeБ2x?g„~;þC?(L«H77ư66֠*Xu$XޅC %keJ8І?>Foꊢ|yǝIzt@20eG)90j)M8D SsX w=(ٖef忪̬_U  u8x>Tjy*r{22D:G$dVEcply j:ȹ_PyLT, fՎYPF{<> 6[#0puQѪR/~hg&ѰN$zwON.{\a5c5N;g?+`vGRAhA$U^s^?h`^>4֔?$KT3vC9kltag.Bu9Eini{67cww[]Unү/~ӠL ?{ Zp5A9p-&39p*"Ոx95BJ TcSE#eHΞQk %ImR!]ż$R:",QN5vgܯp\}ڝqǶZ[veق)6QF69܀ŒHn"uviUQb ~[498f!eYK]rt\0G(1ȹ_NCdR;m5X#Fs#>/`9K5Qin=s cׁJadH &.$E-XrAWԛ\ArZMS脁Mp nQj\꣆%z}+&,x 0ScGLP%TBR "MPJCi QЭE-0Zs{X%wJZa囔`k>^no"+뾍715:r M\ngxYRA-2B*驒y "\ht=uY;%p/X30y&R}Vº  a vFE]p1S1-iZO)fޚkkD`DcфrP0$XkoFk-"N*,H V^cJ%yh?Q L'UH%STNWD%@6A9GkX '~(vn6҉ ǩ-/\i*aM煰ä~34ImlqRNȸi:ߚ_G\fw곧:qtM[ VG{_p&WZcHS%9T]Swl*\q5%xylVÀp5!IJ*D `Azms( &unb,WCo3eװZ+|S_NOO&K%QŤ>cQ+rcptv8>"E׌T9L/ͩm]41#XyUx1\^.gX^ϽA/zKEjI_^|qa!n$M׎. C CekYfw > Lv5[j1f7Y?98_;*Aw:QUFGVDHS}p\VbNO1.GzczGxצޠ]_쇗߿!?_0Qg/ݫ|+0 6$Iy'.kpkz_w547*u9v7v95%mZ(}_4bU.E/^;_GŮBsA/\EMՏ7"- Q`Clg;Z\>ukb(Q"_dp4r л d`R: a &NI./QWqg)a>9j!%ˮAm5QoO S,&%]E_d*7 4Ѹ%~ͤ>l]'ivZ A+nMhGmӔ ` HiTg§ ٥Q-JzugrYGar11{) oTpZAzQV+5y~2ջiv!X7CoSu8D.eo_Lvi'*VarQS&( 66) L~Y::m\g_Rp!mEL* m^ַmB7<_g3 7ܛp8ť4n1Ms~yn^4w g^n?Lxj) P|hTtKj7?t'pcq5礂]Gn"D†tf-\؋SL"TВIO8NdT.c؋ݱ!L~2AA|h-iD"Pdzn1Y'ppYfc87/ p&]SmD}7Ы#l+Jp)?=ޱs`U9kjFKbwm#I{7 En3deAʂe#q&)YM'dYݬ )9HUD'^( I|⬗/"2/7l ^0`[3΀HtSAIM.$TlRRDiJ-T0P-W>c"Fa@!h͙əsT;*E >Gj}i]0]px^3-ؽx7ǟg݊ cwVl)»e ns9-}G'FNk;S giKI 'x2D*[͔#S^t$x&ЋVKM6<05% UqE8 E'T/y:j&w K\G"%RGSt&O!I|C;h?V2!r-J?o'77uaV_9'?4\;t2,ms7Ws? 
3=ϕrq:;_avl⡮Y*gJUD4F.psD]JQ+%ʖ QmMx!9;0kSBQP̂K; 5c:yzَ^EK)|_Ŗ5:4)f|*~$İ-+#6vtsiٟ y?g [öoFY@ 7 Zi~5QքM"O-^ 55zh~ogxV{Uc-(jL{fٖ>WۼH 104|a2ļU[&i[:6af3Ȧ>lxn^Xu7tX23]xzǡf[i7ox'T>ErE;Ćol,Ա%/SwKXK:䓟Ò:;}NPvS)?y˃,jf+@r1GK.8q eNFD cS+j[6^ޚ?/IN^?*e..7a^ńaFM-˺WOR epO/v_ٽYr!v%~zŅLc*.$쨦vc0Вxg,D,(/#?fM"1XeP`Hym` XA' 6(c; >PBI=N#F(ɂ6F.9۵/?ج@L{9}`d$T#wS۪7?U;eut6~q|D:T)TDgTjeD]E,`2,6(_kjcƯ&3ǹ*D&AhS\$Jghq?UAr<+> A,3α*q$JL%-0éroˠDe_ġx6 u^zαF#RiTGǜa:Z0hd:[qQy=}ƎjA9+26mHtRC!JI<G15vDkl}-:V͏kMC=PצmegP|%w $Zw\G(?_PJrpphK JƔBYhlvJadԜ>!r\~s| TN6W(T4 3bCk7qḀ5cZj4y81q n<); DPpHy1/|2*tTMnq{:#g#l62rgn`CAV j:uvP۪=asͱ׎^I;BbQY~ Jx[~ 8'iH䜻RXP#TtpBhF;[aSueB),oEbI\?8^#lʣPx$wX}'cxr?nɷɴ{<@Ѵdk_KέmѶ6fkZi) K gԿJ6G]S+diP}9|hENQ>~tuDGu_!0 s\2'5`e:Aȩ6Ң% JhcT21C3b NP"RH)P\:)9;n6߁m|;}*"՜luj{@X="+wpKUUiajgmvK.{ד۪ϦֳS>IgUHrB$uUrna z][vM]r3?L' n'[p#jv薎Y/\Hqi&9鮁*v-"׽i{erp e˭_6ZmJ]Ee0Dp^}o˛/~4܅Rk4hiHYGnHGg#2Q蘘*AJm;`#DJqܱm5KHAx#q4R੉u`a @Q#AԱ98Vr ts~{}3WƳj.ެl+Nk-xSoܫ]zKJLj׈iTB !(o7.B!hIq+7Y +{ey)66Ӎ` }N5Dg5pbrZ^z eT.T<2:U ԫ~~2j`FT̐xW]]*7y.hgNE[L`_'s*@,r4 aP8wy4xD$Vz4ӄbȲ@:#1'`2Stu5:r80 awttIQB*xUdZKoԆ PaE2 3jB˼P#Z}c <8|.o!1&4P%$'})*V4C+48P1HsXᆋιFe -Eŀ0QrT4 8Krd+rPBv!m JG2G J?$fM`\FJr()Y)ɣKȤ|.Qnb1q _.y\8=3<.C w74\A-&.8(I?&nY0>"`fſwO].;/ y?=!Oy7$,̾obsd!c~wjth M(j1iKBGU?+cjZuuw~naY9{J~h8.9C?\brGq^&o7].Z_,q!8T2 g0b'|g^oLjoed[l{kX5ntlYO5_,}: W*r9=Ť_^cU *_Άv7#"c 08e7Npۈ,i$9N}-[ݒ,S4A7]dŧȪ.p8y~|ۏ=ezݏ'y9Sd~})}pGZz5ojnn)K]K6[Zk߮ן.g>G'|~3dc;;߭ j.naȷURy]*ȝWV!1~@k_m> +xX"h#=6F冒ey[dB'Bk,!W.v ;4ho(=BU1=CZo7{DÓbSIZᏈ^BxωD94X2 NY_O8|;YmPlLl6 6 JUnj(1ɋAVU^ G" IBݪ4}^$«''%bL"0_eY%H*ιdH:ʺ+zBS(J]9KLK\zɶv=x5U0%vڒh*jѢr6%/&~𳇟G ?E$&\-N^@b$2&!QhH\MO?|fc+f׍nJ}W>Tz~єىĻ\ýc,c Vjɭt:Y43) oKj2eʚ@R xΨK\HQ&A^<|x7>6ku\MC"[/8ԜϬxtbT) ZޱT.,H x2D*K͔\Yx )t~zөzzit甐ڂ p~Q-T,Y)K^tP)0C*( k.$p+!*< CqPcSDi?$2?I_OUh1gw%CCۄz(*m]v`GY/PyW7/~#;'U>'`9D04:`-8LJ!7!ʦ^?^E2N315p%r(d&Oma/3'JfNaqCMwdҐ~{ 6~772AkxWwl|̝M>i8umggYf:k(Ktv?5NI-',,熷>36,>QƛTeT,eԲp~d˂.AT8HA2 Q9٣35:8]e}7,* MZ 7 -;E!V lKJ~5]/REK7oY{",B ;HQ2 '>_E3}^L[- 'u-P3G wNjL0[ [ )%ś<^"euyN5ݖ RclKej X$?r`b蠶}K2,vmYϚabfu4^0Uڞ[,l³K8[#?bd;o8\T*cPl,ѣmJ @>~/(2^m8AzTq󕝂Oo,J,ף W遖f'1S8XrT"a"X2`S)r-h"Kܫ;J5ɉ#A߈iןn6mo¼ c\WʄYs :WM'') xY +4K6i 鑓 y5\HI,DR߬/mՒdgT,`8Vp1e$S3FZIweY7I J<\ woa1DR %Y0hiRlG_筣Qx-cq>vroW)}˩'M$X'1^kA9Ā&HŀP'#bw6*jMHơI; H?Sg0cM֙TFJg_Ȍ8WXB%KZT0éEP"ė/PP-ThB].FDHr)IxM 3:98IXu{ϫz`T#ϱc>`Qܠ8Dئ 7BZAQjhs$zG19vDsl}-:VǿUsSy m Avli~B% Jy*і"US AdѸ9(Qsz;a$kV=e`N.ZQ4 3.l1-5< Mn<)‹ D qc(D@0HCԂwf]I3W4ɦ5)@Ofգ ?ݫ3Ufq]'sO??M`i./cYijן~x O]=MJ )q^Ԓ!x@ja=nE[`շ zM B߇Bc% -G mwKɷݢ.nh]Eo;Էt4k_~4רER|e7n^p4>N㏨e̘ |+зCy)XV­Ŧm"*B.*G/YrO1H^HO\(<˵<fO>@nۍBnp?oqp1 q0g7q4c oPXG VN >oͨ#<0 fq+pc8=ݝ_K{wb.-+Ŷ7 _0T5lTl6 KhUTXYP9hvWd8"?`QiGVe㄁$:N6 .G` C˦1L!x`DB:Qu ͙.@i`A1qvth[ӂ|&Npzh\d2ϰa-Zg*Bm\s[}?:3~l-UU'Ζ.J&FZ,]5qm'MD(1fv_.]5 R>Bٛr^wKО.j^*y{y:'Lٳl 撒Gm^U/痨r?ƻjey$TTZuVN`A0{Cȍt}tF9"#NIﭥJPy:iG[4p"x^[& Pp`9,M@F:4RTz멉V% , QNĠx*lg^>&gϯ& ԞUwf8mP&熩\^|aj 7U,-^Uj^;m̔J(!|:İAK]Xd$TVKQyc2df f,6 t=v 4w]t謍'g XONRCTY&PuøSuLSsj+#A䅴yKpݥ(zU"U[K2T_Suet 8E%&z.}pR+eBL%9ݖ "D)Q)FXB9/OYe\/quvI+}~;ǂn[Fy;={!g2㮥~AFҙgX!ɴI9_OH%qV1(!:Qށ>r9KF<'SXKzQb=Cx@mY-` O^XiV^86.g~-gUͺX=̩$bY('s3 8`3* j4YKB̘k@-S_mo7@8G&ŷ; }b/l[YHI zFȖᨥ&\,{^nլkAgz;7べ>Нt6;t-e:+7魏Zc=5t)ZN7Py)߂m7{St,VtB٠tP И6UؠFnx$$]<.bk9pܭ}n ?oRF?$]W)S Ts]mE`SZuk\kZhuiUU m엻=Sݗ<l2soD7nof֠y]vXWlK`b%Z8oiW>̝DzyT7O6O„6O46OC6B烰}λ4B fFake󤻕e#ê\|\X2  B]\.%Gw\,2gUpSIp%qGN/nvzw?͒r* ~тzXګ}Gv6N 4.z2#Аqև7pAg-6GOI@_=aa^m#tқwvLysO˓ے/Ub fo{^}UC]=;xsq:[o仠/=lfioFNٻo"/fPߎ~?-mFb`pйiA+TjrΙtx[f+UF+ESQjEWԕBd+Fg>nz/]Q StJXWJ(`0pYWQU9IJFJ]RI'1#]1qEWLSzQt5A] 銀ltŸ+U.u]=DWԕc]qӃFry>✷hYCgr>hJ+}k>@ӏ5N&TJn(mqާ*^̞d&9Yo3J`8zBٟowϟVˋK[:t.s 
saKX0.2x!?n1*ƕ*UɇLLEywR./NN{/gꬭ\gB655w˳N&[Uly4*J[tO?C-ެBL~~i=49'v~:;J*GΊnkzϏ]S@<89?m7LxL!)f.(םV(0^TT|O?#K?O?=\/׽׷V˦[9tW#+K#ȇcj}߫qg"VZAOmwb_ʛn;W]=Q*gMVg@NNԧD%)HWFWkd6 DkEbJ/&+@N Wa.bZS1E]Y-tFf+ƵDWLm"J%prZiS4fBV`bJpEWԕڢHW |6"\ϽhL _bJ4EW`ϦcAI±u(U#]>-]AЦFj20qrL]WLDutE2]1+q:y]1EWԕmIT9v&BcXԈ6)HX1M.EO]D('i4I ) ]1&dZRS2 U]0`>j _WLLueQRf+ѷ#ÕiL]WL%@zl$p0.\tŴ|tŔ\Wj銁m> 벹N(7)KubV  vŸ^+ =,|tlzu,FU{e0׍|kw+MUTաM/ڽ#}]Q*]1.\tŴy]1Ţ ꊋvHW C."Z%tbJYt5E])0fggA@.\HWeELQ:ʥ ^ZBzp銁=f+-@ltŴ:3SzSt5A]Λte-*s"uEz`pttek3+OtŸ+Ed)3OQWNH#YCP4˷og @:K_Bsy\uIt(}giЫu;^^:}ɏG z:4tƞ7/ :za/+Cw|*|yyڞu>7e\>:Gh;^!ܹ0zZ`]w3 vPߋ~ UA筽W~4*ְq: Ya;=tK0#:5bٟx4,'f_[GMn_Ͼޘcg ߟoivͱPjmѧ7le/g=?tR2ffI,2BJ('Yɜf х05p6 ft+ܳ&/N<2c b aqcEWqO]aաMO3+as{W  u]1N鋢 @ k2K qA+}+D_t5A])P6'qVf+u2]8J銮&+UwK3nBc}`Č,Mgciƕ́V%?fJc'hiP9FW;vQ8qJ]MPWF)900gLQt ]1LSBOQWV6#]1i+%crFqM6]9 &+ +vd2&hC'_WL)K+gcfQ;8\6Ui2GZ+]tuhKzsҕD.WICr7.(3kwkWQ^u$ JuΥ sL[HX(wA/=ܝqig`r4XO(ESԴw;:00:bAqǵLè#P%],O{% XǢZYR1 ]QPj]59wKs&^t"+ T6Sl•"k8Je{SlmEF"`-E6b\iK]WDi&+c4hXFW_`J])gf+vd+u:u]%}JڃꊀGW;q*cԪjVl%#]1YpFWL;qEW̞Mou&] GG"֋IQBb=]C^:Be+FW\tŴV+tEWS8-㎝CkH^WL)MuwvxK,_{ l&}Db;)2&qM g @Eb U dYn'ˠͱNT#-Yϲ0{ͥEͥcqGm0K >g̸rѺ2eYg hr#`k\tEN&Ô]MPW֙ ؍^3W O1тK~2H]MGWN)'uFrdq}6ID+! Sb+Ɯ& >]iA+T%ѕ۳ݱPR&+_;k\Ͼ×G|SgC?mxлD3 ӛs9RN08\VIq6EZ;W1͖vVSϊ@󖓢7/8jiovNO/WO?x|j͛mNwME^+jɋ/G.OG}%!YW>ts麺ߟO~IBVtz^w͋/v]KnWk>:=5w| |qlބt=1-~?<=}rw  BˏxfV7:}c;晗gv܋?Y̗O"tj< ަn+݊ T$=ɠQ9$+wyu?^V6~Z/D6tٜc1M,V1E EatuNKI*P9kiN0;G;2 nm#ój{FIr@K|? ,s ZZlwb?$VKݶ5X&dW*ҸTv/VGerAVOPCI3.[n^, @;ެo5W,Y'jZUCnW͡ꨫkn7s>7q]y2!˰ C9T'9X[&RK@ _17rYNۏƮHCL^1ֻcB$MYn釡y9Ôa6qe9 $*eFN6e؎al RJ5n )FŦʾ@^ V>J¹iG-ʶ ]MBRT44w3IvR~3@MN=P-ަ$˚MTYULөPk 7 VU<gc=ȵ4NFtSHcjS!hVZ;F"3H$@-;{0zqD'q#z:M*3p*RQ!#>P3œ&!=1="B8Q^ocCOs\hWvнlxl2sϸe{|PJ)3OϙƯi1eHi|g3K3#m2]H;9p19<\|r9yN;)9C㩎GSU" '4 uRUn@ J!4i7 +@|0!S&aPFe)A@)+Kȳio'd'0OL\*FDؚqu>|&>=:\ukvwL˭%(uGFpǽCԢV*%uXYP;O4#f܇2ф>u w->5ey wD@4%)"Q* J3 8XAPVdix=t=V ; EO*=N_{FGBMVhRM,$cXHh2PV. NjDX]ĉtC]HM J|.֟V'EECV_2k26 A*`#)( LY  LuVb1qQ(0֣׹g&+u~ 7T%oڿ*Itq1\r2IΝJAmF`Bk%1R0!էtVO5rS\?zr5n>x #ϓ"N'q=TovAVxy1z;6zp!Ւ-1YW3(lfYBW2 F4~^Loځ.7mNfUw⼳UvծJPtdj\]#a|R՗%ȯ%Op6}wXU)tOxue~b _o7/|ow߿GUp$' O{M4ihm*vDfiv/+vֺZ@~~.EܴծG֚`OJo_@^IRE07Q!z ƪ3.1ă$r1_dp4ʇ@aiʛ 4G* yO Il'áyyzs.8 FAlgi0W*J,8n;D€"BL{<tǵg@Y; uD=4@{h~+B|ޝ_!VJېMIzlƼC9Ȳo WR޽znQ|*EGg=8q7={z|nƮmG ܯ#.5m9y;-u_ٞ -Tw5a+'eYLSAy,̓*gJ\ʙ"SoN9ۜ'ͩVhFkdA:yDDPv¤W x$ )[X1c2b=6.LBZ"zFΑ ylojyS%vw @ⲝo=?7ԷTW̬LQnU旝CzV;D%3Q/LBOubx/iIt7wMh޻nWoxռ)Ym[%r>S_MC " ._=P7aV\ ׇmzO5E>ث7m?}9~y8F:$K}r;dc0q?뽂y}Hv&_|v^tGR6Fh0+ e7't=U'a#%C$9`ش| Ɓ-):ksR@$`",ٻ+wWxعwЫ'=;+9o%ٔBA,:IVLO`c;M{ ZHJ2fhҔ&o/ШkONҧy@%/R.%_Kq=Zl|X#Wgz@r~]RgJa@ӌW: cu _x~UbcGnĄL+1\</&kxSS2puGK6C[>Pnna/}_L@ mҕ bdD,x~v2Hꡅ+"󀉖ҭ)2i…,=IH,nt;?ySt)dOO&:.>k9zRx2b/Hz[3BzCWc_aqʔ$\0dT?TWʿ7sIǠRc PͦY+f>ݻ:EgEwcҍy'Y/is(eTy{&'.8]}t~>\'_Db<Ff2LfDYqaĻ3PU{dN*j,9uQ5T z[ ri;WsrF^;G5bgrEqcI)I2O̞; Y~mvʹisɋ"^S 8©ّPJ$npK,N* />/Ne‡{v tǬ?/>ynQ| ~ȣPT?>S#8qf $﫛rբzvӷo^qk\l/'Wqߜ^xj.a7}r*o cH׍o\_9M7ci>__Zگh!#hj<BAztEGDW . 1pte(^ ]1E`:"1xfq\+&  %BW/";c+=&N+CzPF]Еwi{WfІt5P\ҕr[JWxx4jU.t8>GQp7? ]=-P}3+]j[#SoN+ ] ! ] J^ ](XjՃWW.tJ#;<&2CWWЕPBW/yփGF3aJ{Zw''㊍K}BK޻&]7Igg}odz'+8-q /?Ź_OjvNV?o7lr/~㘖/,1`>X (\;\n}.7zE?wn7΀ͮO]|WW*}2CO^Ŗ+ȶ%iFFDռh+ʻfcYvxa~V_6癗njk{aa#ǩ!kKKI75!{Ί .7}a,s ^Y:\~5[hw>o4s|t.nԇջ>_ġ'_S߹*1pctYAc3g' }%q|9j)jޖЧRu.PrTcx{tSؽk=$/ ڊG3 3Θ'-*c |x1 j-NTjEѦAٔJcjuUI3-0sO=P̙6؛5TXmm {vX&KNqv!cތWaA;.h s>WhzK9fMuϹۭ2q{ Dɥٍ6"Hwh #53g?8i}G  ܳh)IoI*%;ZYy6Ԝ%g ($Ƃ15f}8oW]9TRϹf$_n*3u9D>2z궺If;ɏEIRI-Eu.j%4\8e,Cpu!YX25-F8h",:zSbR@՘[C ;n.& {oƘ9fES ֙t,ά=Qcm]h4h2۫hb1i 1LgύZm5f713e~*9=G-hP@uńU2gJ#=e&k26Kq-Ym[:쌹5RT3v['<oN43`+52 *lf\,MhN6`{'z H[)`J6N*T|],Ǣdd'Xt\EM)!] 
wLp-6T3fyB#fKxKbh LH,f)G#RcgD*-΋ƸC43 Ou0[&'LQ Sb30Rt8ˤ0L Vɲn7X_Ŧ Fn rфpᑬjYU3Ei!܂Ÿ`-1S8ue' K ƈHDI1RpT6 ܳE00M144 iG|) R(3&e! d#op -F\,kI֢0g3[X)I1r>/~YlvӱcQ-8i1&?{׶FdEvYyE b/3&LZQ ̿E(dVLcfZd**32Df@6cy`}w 'f0A׋Bo]˲ßXp 49ac[-k.Xjja%a1%S4lU0J'. -Uztᘌ8;`UXzaBxPr Ұp1!ȄO W$2V** <VYѸv/.D,/xݶM8ڂv Ύd P?`yrFXw%,Զ ))e% `~x/)^ڼd*,hb⮆Xm'G]r5A .0_GW1*Feq  B{ hwP!^BGmhv)0ְcZVs^ πB.dsV531chdr[҈yN).63`,QMH>K4К5ֺ0LC:b8K4YD`a>ym`JUΪrݎ2+{# hߨn$Lz`tkӤy !kC:k:8,fxsC < X%]zD[9ꢄKh,wੀz`B#R T"/a wB)ViV#`X7klUG/7+d_p"\x>W'W` u-ϘV0Xة -,J1<*G`5 C{=k_ڎkeP?yZ/@E Yq2"Vl,2aFZ;ԈFz@pkOLA IߚXjt$ ~1X>-竿돺uNGj缘;X3xIZo^CI&.NSv~ˋi}{OSͫ|*Wx.Y_w*KWtB1NwL.WK3FsYFw\t -hﭤ4VJnju*fIه7Mʣ:%34@J\Nֶv bE/X1 Vdt3wk"kR.tU Wzo.)͸MS6`|q-~~-2 t%?)/o2[Pt?wr˦z:}t1p>T{X'wtrľCr2nmon2|PKn9[Ѝ`ؖo>cLPeZGr+08mu cEU㑥1lɍ+]~9ﮠ  ֺ\k7z=|ѿ_=Z y>T={[>~ W69tqG lϭjUs<}l~mvB w.1+ ,o9+qBJ롽uWdtllzgb׫=r^_7p/rrary9\bˢݨy~E}ng{y uA,%y]OyQ,xlA}^Exu6ycl}تߟrnK4gqerzcrxwRƻR+[?܌(vh\ǀc1j?~3~g_{s7K\݂CS3X_hJǐkE8;"4_xMw1Je`*{aՋ ֪yeJ (L>-ɉkL{.+3,2fh)9P#gjfJP1sCmtl;r]O6Z39+jˎ 4/ :}jΎnZ>is6>ix7LN~^RY&╭U4 `t՗5C8bkst 4FGa.1N_[6ӹc&~#n)Ǽ^z wj> kSkqw DJtY kѨkenx#3 AhJ/rL &Xj*…"neM56Jɘp(`M 踤|[5KƎj"H;*Z9wSlN:=+w$}њ%Z'[#8.5vDk첤ryOkU.!wlvVaKcN0q更 0تB3ڇtžOX5PN)X\hgZ9]D#xf]3'l& C9 ᕧM/ 5! ggA ~>/l{X_;pj\ sw:7=fw~_[{y{mm9[뮳%u 卐ꥈb1^EqsP]"/AzzG *XQ:z^^2utjEmyAgio;rad(,E&Xq1-%bt/%Ge)RrrY35/`QFgd~]]u1 A4tO2kWۧ7q@Yyׄ}0gÏ+7W^x[/{btBm )Eh@h]?[0+punyc8gnf~twfC\nv޹|uhǝܭݝt|V˥9]SwwmTsK#sk ǯj=cT{3^#7CTiNAV9ZeRIuW1RE|*dV.9'.^)1DTf=Eȃ<-k8']-?ɝQѼvH-UTb:z@,-bR"SPI&wKySYew%Ir 0R"*ZVUf)P_CR$RiHqbK 48`Rgɭ*R&.X]Ȯs6h:17:$ ak:Vog{5f-}DtBg$fD[}6bF܍\̨X++AbF_R\2`WƐO l24iHќ CߡSf뵌Vˈ렳v[$ f$ B^FLRneFE*a(a⼄OJ^-..qǛO;mŻ?!@eZJMϹ歒I'A+w*λ 6UmI3F<|W w]aWG~:扦cRJdQ -@@Nb(VrT 0k M2nGy/L`>?w&yoK~r%m⿓G?u;2+O=逸_J@*{**=R @L9s U֙)8>sD@qL e]drl&bQV]c9Ko0is$98sWzƾX+cA0ӞˋTj/ڸ:a{/ohx<}?/jE -y"ZL;E-sLxm':MU,dIP=+i l6Qˠe94L. 
P:#f̹t6BCAjc_6Q3[>KDR % U 93xrnH 1 H,XW2 m2%N eZקvD&X]& Q9e6+LH `]4SW<"H$+][oBQq?YM{s,K4=D.㲱X2 )K;h{S.ݏH3t.yP3zٱ TT)2blY z'ъXYb3T%(>GC1Kk642ˠSF0Fh02l,^f뤊Nv"GOT^v}?v:1Np2>,gUOe!&[ 9$Tf916)͓ LK Nc.Ω?Io/&kh#3D(Bqw"*gc@] ( ^F5 9DwB Ȕ@p lƚrЖljY([ҕb;&R_`.&iG8%`hS ˆ7I*3ǸLw ݭ1ҝn*6_yw9qT6" O`$Ju%]k` $I=ᰆ?q7E\~)DfĨ1,=+X3RP.:UuZ\%Z3WNbb}Ww>sxUlU}Ӥ8&-{0a?ejmڅ޹OIdFX ,4ѰLYW-9fXr1k $Pf :j X[ XRh>E4%5|1l˒,DOOf659PsNFZ#07"Ge#9itO{}4Z&0 q1$Òa 3>ͼhGX İ%r9VBIk*(Ja3pzbR̅KrR\e]͜{fP̃ Y.M d8WFܬvWf%{wѶӘ8YstoOJ5D9%& &\ BkFgU4:ε#zPVE<KÍ!65}9"̹ۃ/฻tȻR/=[cK qE!?Q,$(5BoF/~b.oȵh{o7Y3z0<=4.nuLU}pyF$Tjx:ɖ(s~<vdhq28tte {N]?lkJbϩ ) `Z!3 Jg@ *u^s =T?D J_@?ƣpil}wnF(io}rxzsINǛ:1`3kwFɜ2 &;;} ;%tY3J_^uz`x` GU (ng]@q([N[g.YU`Ff1%i0ē`I+wV^nWS,ޒX;9w$#vq]-SU|%q)'_\`RZ )YN0)ft>``ŀGJv.~-ɖKrdn' x4Tb]Pgi;$Zrk_M{9xw;3@ˁGi4|UUrZ?4нS+]iz)Gkzur }q`dͿn U/8]T<)9(J>?Yy!krJI.^p}sRsCޜTa룤WU㤔6;X/zɬ,Yk7Ctg˪vk$ao^zz\{2I|_Q0R葐ݧˡY.O>Cy,?` cnd MO4&^mP_MW?zQ4z6/5䗷MEI9w{x^<8rq=rv69]ҁWo #|KUǗHPxm,& i!%V,UƢw>ffFC, V;W;P,Z3q؜R<(3&1&Cup9GG`DO%u +"!I$5gqm h)R8X=dV",('jEVEFʀR jJ0+&"]H"٨SBR.١^F̙I.ǂ4[dє~e,+K.!SEurL,)z(0>B*2KD CYf=̌ ikq|NLJbIIt0yK(k!XYC}NpMߧpvձ A5w d[=G`{.KQ202JJ -_ke0-)0J΁jUQTE"O=Ybf9 GS繯rvX1Sm] '!bV <3t PKUڷd|mn[jƢ6l.$)EQوHDP,]UaIy^ X"}V#(k\dBô͡F0^PLOړ1"dxX%ycұC^{.= x۪ΠL@\Z6w^M߻:Rm:cNJovlK@.4jht@9.mlJ -"^t"Dȑݢ!jE;ql6qhoGnXzH4I@'!,:şR>[YD&cc@3qP" }.7͎2 5|6}tGUuۧY*Vzմh8gbG8;gtʏ Au;$NjO9_Do#՜Pxp#3z 3ڞﰣGC77w"|됦Xcc734AֲK/dfWw4MU<ir~Q׀<􎦓+WWO"d꣩~L6GA]ȶWx7;}?4UesO}[>/|b5c{}~>kQ_kέ{ö b;2aZN-TsOZS"Q4z!sDTtmX Fuybĉ&+` ދh$L֮ꑳ*&`ʶcQ (QzASK+R,CMѺZ'5灻 loӫٗ]EI^~_wQ;izk[*ml]+LYrs*K_\^\-R{Gȝ-TPR|8R9ëRutݰjn}_l/~u-y6r|]\y91- Nв @.kuj\|wLL>40c{Vrnoھ|)l -ѡZh?mwm2y9=c7)ty#!sort>(/d֍ 2qF2qF@GEdU}!-S1%)' "9mw1 %ɘI>I$oJ| 5F VFz3qK!x~rEg]?j+,xWԼ84fNbFL|U Vf?S*c̩^],.v ߨD8챢x/3uT^ǻY ]|rО#q\/I:[ (F)AmvoUE5I2 !OvnTK7]O |ܶYy=ƨ  W\lJ%FMNl>-H§gm.!.]v4aɄ;#S]Q:"w `FAX%>frȘe0f23ƴ;mVe))3Q*Шhc~qN+:9/u]PߝEB&Ȏ$]432*&PqYzTM^ŧtYp#%mRY2^>C&R3=:]ȳds~=Ƶ5i;Njo}^*o6-:{ ?Rgz|dدVx)uLS.YfdRI;ʞLAȑ)$;`p~8>A;s&[G@mS풱%Xz(!IfZe :7d=3bS"Zovؿ|W{[RE?N,V~ly>o%jJlM04) c"@ bΫ1bؔ38bdV9 u$Jٜ* .!JLJ|úE Oڱ)髭z6h5%r%gl]_1<6mEW-\+holeJaՀu ؀Yo>Va~"/ߜ,!oN_ׅ;}}y,~{jw%.8iAWڕ'qR5 ]^YY+H^5zC{c$ҽ;/**D#XD(ؘ 'J*̺ˆ,ɱz:XB!d4d**p(e b?Y?eb`hJ6&Gֽ[_QtַcZJ5[ y/JGpهoozj]2]o/O=-CT7D^eIEiy^z r "EWr.1mR veXVf9 Dm58R\H1.+ 0@9l.mKӣ x_3, 0,tQN{bZafAJH0ZKtj;/ MMO55mH`I0CDcфrP0$XkoF@)<)Rt0 ט@GjG4 $]O'Rt+gB B9Gk oAB6K˕)9Vgv>zH)i+ ,2$>6k?ָ饤 ĸaZ{(?ꟃK '՚K nG]')=a@ZA"Ioz)(LYS;:3 ^y dE-6RkB (n%t ןC qlY bc,>-{ocLV/ Us8&"g_-ՔW1av:~EQLꔡ6z \i.bٴissI:U2=0*aSvMYzZ^VT'糓7u0kq!밈[/veDe>91ŲsGxZWkkbNUCUCekY#PDqQЃL>.F狎./gGkk%Z\W뺶JȲ29KEK௘>l\/_ᱪP- †E8s w훟޽M^o_=:Dwaz`\Nc-`<Uo~٢jVPߢjUÿ[luKlzY+vsk@~~}]K{ծGl5+&+u3 5?3 5ET~tA`W!nmڅf_:Owte$AIK G)h5PЃ=,0PJSBx0Ԁ;?W]\bhĆ~v:c? 
[EN"a_Iך1-gie_ dW1&<Ą[>Ąewwg`t+Ngyn0;rtG!x̋ ?\ 9<^rÜ9Q [cI"!t!ScOYke_j@Ң(O7i㬲#r/4 <@GuLJw46ݡt7l yC%1x1[C+PtpcMҢ[}ZeovrVNf'O+b]_pLg21) -q'04e\,]ܛp:./]+mkIP9LIbNa ٙ'ӷuQH'S+0'>8\PeS4UKPcD2Y+1тFQ4`pHtgkp6sS]*g';r޾uK 0e1+ƅtJY<ÅY0!tq4Z &nU?(x:t:i}fjFZ4҈DJ˅bD?d 2u0*e& x7?\MJ~+#;n'G3t7^O7/zSplpcdZђX4jndࠧ {i1\ )Q!r#1t @3HD%1֥evU  8)>q usJ XHǞ "k/%% XZY.8fظOxnb(5ӴS`ɻAq7Vqq{1I#yS-*ִ8W8HJ #94JYFB\`UPxt񌧳QZ[ceБz"SKm5ׄ1%Jι YJ,]i ҋyZad7Jr?XtSS q9RmFP #)~сm!X0Tz0[H($qiN&iFg7?01t=  -KmxREa/dGdHWU2T1p05oQ 69e uR4~ag5  $aFA] 0qma9960~Xcwl\Sv4O&uQogvP{T4 g'ssӋ_92LsAM}?T9.gaZPRcGPyPgd'R y8 _Z"jRn߼,+-*sa(f`lX|{mV茣[,oGV`F̉)|"kPBD>Cp2$=Ѻ]VLp/ٲ񺂤 1AS1A~k m=6f4918:a:_r8({X-AN-P@5j|k9G/U{}q/w2,-͜4 eMl&w7obx%lv L㺭ο j\.^+;ʼt!$@͋ zpM5o-w, )oSbbU_\ӑ9+TX촻L('m t+]hzn1ijI8["?<Ҙ (TTTh ctEiP#H"[he!ܱ7;qҶS [/xv;:N+ & eUGLJEL Ȫ$+=KpAدY?6A $C]TLnQ߭'Oxu'/,<[ZLP74lwѮìن24h> xM|Z5Bٛzb^R{j]4ft u2 M9k҃]`\Z!wJ;f o8o}V MWyj;=]mRt;c!{DW)(ὡ}<]%+.kGW/]% ]UpZtP؀]炮VV3{tFT#yU 7+[ նBal CN'7?f~Co`p C*%ŗZIv[bɪLNp5]鄒t4"iZU;b>*^Fհg)]t;&Uo W82d"A͠+"$905qi as:m`gE0{CW FNWRvb^$] 9ާ X𽡫W}j_:]%Ƌ+٪otݟS| .SBW ĻNW HW0L\ dpjzƧA}k1~5yWEO4R kGUw.o*8Ⱦ9)aVnz&Kuho[ Sw*RH;`f<Y6Yv @k_pZC7~QM]ENug(6JYQ%--JЪMZ[^az: PVJ1KBd8BN_EHN0˾ 5@G(<âcEzGadm튮jܾ^f?dO)WO+FjXYW|˽9 hqXpI!TnkK9~;L<]mRݢ+];=+,JpJhu ^ ]S#t>i*l*\ Ř}xF}ԑ޹mc{_mp#n7-v;TƷcܢ=G~d{FdHcY$?.,Y^eY,@O^_B)&.VpĖZTR k"jܫ҉[@eraE^*KuE凅WQ1E9-pAֽ)Rʪ=K+57MAY?G-^ 7x\?~l4UͶ1ڧٍ2ֻE4 ߕJ嶬[FG|>"Mx,%k˽}KGe4-(>8n2 5Іd6, ^1A)Yu@W]+fɯ';+YU*q2yRS5Wg_OhV=N\j/$J*լʃ⺺"2+AAz?-~`,:@0hqGe;PfiL,W!Y|>.,dr[/OQYD|_fS H͒9yLGmo~yw aS˓2xPB:;Q(#5=Xfd5BJ8x떿bnrHrhmc&gl"ZG`dM mu#|tKnCz#<23NmUy dQЮyĉZ.TI/^ߏZZu` 2**/tg 3^yGoe8hj_i.ɓSly)b4B NxPb2ucKGD[ HiDؐ!"Ocݷ/`|[-*ۋ%-IN?`A2[ր֫?J_(J&=p?3f9'!c'j qˮb"'"xC9ą{Leu!y-~f W[a˨/_isd3Ÿ_#Q,h ǯ8oINoGXYo1ѣjrq_O¥FJ5ktL%hB_+wpB4bb 8qZ4NDbYq1֐fT8ܶh2g@%eqP< \`c$ RTKBmYp0* 6@2\\w$meL%7"4că &\\kB _~*qA\&  ~RH3V+PkۗJn{\uWn ppr W=TYj@8c#=LhyZ R=zʹ$6 \`ɂU<\Zm}PWς+ƕQ2 \`Kl0BL+T+ls*.xJF  = PeQ{_jo˕ Xp Q`Uk=i!%" \`PƫD{\Jk{\uWR(ET@B+kT(Pq*)qA\)a%)3#r WV)q*NJK& W XpfDQn83q*oz\) W X( P% J|UqeytB+YjWjpkVEl߻j$Ӷg-㪙ZNTjϖ=z6cq[߉\Ax(B+T[0YpŔf+lx0oKL- UEgVfY<=!%FU}x5il Qա`V[F1!L[*ޜT(\O5O(3# OF)5 怳fS2C?%C9QBOlǃ5}-s #\iZ9̒c3k>8QVr_o~__T7[(6eQn/Pn!Q#g7dYAL;."KSCRk u xH"OtbL` )[>0"Qe `{y0v&f{5ܮugi ˳|-FP/ϾK "ؒ^dPTT1,!ibc"cP9#h""tH[д(lJ Ȩ-$;𭪄T]UZ7"+`V[v98+>j૏ez,.=9~{Q O"d]Z`⛄vc5o괗gn8ugN~Ou貳Ujƽ)sF8?]ge^~qq^2K0h8uDFljָ]3L?^'e4Xx$Hc bd7Ix){8nϦ#F^tP.%#`Fl,*,xF")M riU@\C6;rP f={"7Yջ]0Mro6TG>Gɦd_m?mWh8<_^Ϋbf2Fh{/Y g 12ks KB+[yo1|,/]-/7^WbjOI1}\-7e[ SBO>L&+AȾ,ߛt//F??LXBCdZ2 JK{dy[Ћ4GW1>TrLKu_^5ttle۶LXe:/ǦW W].s:>ʇRʿP6a(.o߿9C87z77ѧĘ"J6!EJ"[% K6SUTuSo}>޼ 7#,8XNZEPACޭ5ukTn]@̀r_[{_}Ғ.J(˛7(>y; ޹){lnWM t%s==? j~83.*+ր _b? 쥔T[W1׭>Z16HhA~yT?oO20p4Ÿʇ@A4`E:2 Tfp'}y!f|\~ekr:DۉSZַdwy3|NP4 bCL?sN;ow- xGHWPio9ݾN}2.'N*o}h!ۜX5# {w9Z~ !;fN-mZڑo7 UZjPvT&&b4i̯֩ (ttӄo=?G: ^AYS߷9"y_NBGa+ab"OZ2-ˎ/g1*SGa?7ˇO{_8ҧU : _'k=݀z8ҕ>_h V¯˗ӏw2I䷧h9Y_%PW./+U(b6ԀZA=< ?A 9 Dˑ$-~d1IIXwd<S_)>LMr0qD\&C[4D"Fg8G(;#5btLc/Y;A3o$/8f{}TPh2Rh"5N1֔+-:5 XȲݿ,~mdv>>o 4&Ci]9kG?GY>B b^wKLQ=;{\L*f`J Whmg?;yyB[: ($ F8JQ@,`l0HS~F,B#BXY _00+C |]3l(5 r'U#Ǯf./nӬy_ݒ(&g4Hjq!cB$U2΢)fÄK8rÈ#J0AtV vh5ZL̨3hQ9`) Nt\*:d2kp6qkZuLzLx=k6f_LjH#"+-s?(<̈́#8Jg #͏7W 7p`cy:pn=< n610Wn/ ?KsO^/zg=qԿr6^h9|wSUo'~Xe?|JM⦦mOL؉JĖ?wiN9J,6]zte[:6~7mzLNAYOUV]l<"ܰs'o|)^{tqp߼r˂H=Ҙ QQ1zрTJPiP#H" g-,SL2spڈ T0aCJEPH@!sN`XjtDh^fgQU*67T|f5G}952£UϘ㎣/ tODxL Y E<(QSQ! 
s*8\pIīH70x?&1rac$GF0s zgh9B_(r+AYjV[h V8FǴMy"orI@=0+H8gD"U}ȓmLaIy3@9ii !',u!|Ri)BXNLSnq'JdVlCsOc8ۦ~t\p ؅VMkyMID#gn+̭*.@ROKҁ 0$&Ddp:s6!)HNF,̊/^ R[0`$tDX2:wPpQzpǵi}9CY\]CnWY\2վr;H-}KLipyf.װ"PeSz߆A |n`zԒdkN\?ƮHwOK)x+…y#01N3Q]k)U-hki]\i=y8*M$)0 cm,WS 0T*JeZ 7Z]b;:)fLhB("[%*m2QAM@"[X`B<UX-zg՘IDk1hNu>Mw7koyoE9 Mk;N{YB-r}0 ֬%ԪτevK*{ɬav\;[YQLsk%Zx0f97dj]ԭ32FV:8)r^Ϸs{^jy>L ͯngms~<%',t|Kބ!~|dz१ސ0ǟ751~iWj/g?gX/{no4FP+) ']UݙdK ^s«WT({S~0ʏ`Ӧ7 ,ԣ /r&F+X1s|~v<0ouNijII)`b8C`cZR=)0ááEҳ)[maʗ=;(+Q*^TqS˲ UlbnLÈrݭrQOUs8+MeMlҺBS0X{}̐g[ 5%Z\P9pִF%(,Oxָ֘qm'r8'7n_n\,@^Yn.zcoB I|o@[2RLH>J de޹ٝL֭d @ >gKt-H@6h{uj]\/hթ(2ЌLLU'24^8#^VFۺ-^~wN>ξNjrObMQg?f̲j"R꽥pcQyqkm HS"'->uh\sbYE5exd+!V)y]C+CL^7睏9C޶n8u/Va;7[/(#i-K*R[hɫ9ғj~v+&"l;xirWdCv`p ,FB߼R|#F_.u,+bWgk[ lzW♧kO:0%!rռsyUnݑls4숃T+k9yoˆ)1ѡ$ lpc7C.mo(Ӫ8{~trt˖*\lk6.ݲyzND1uՎ5 (t1l ? pxCB`-{ގyPhi߮ ܘ5yWwѓꮴYɟFй]IczQ=LMr>;N7R|Cl4RY\˵`&ÈNOʵsWP{vl~0jj}O:6 m`+0fM0n*YO**иN&^4 k)X͈ 5#T=T&d1f+p0a<\/";0 "ӏm1 Fp@""ҼsQs[)-! uH%bCpU؊A+<#6IKs^SlZǙTR\]uS 9#6)z͹]~8fbpqqѵo:iɖf,.:eAtLδb%f葔 aaȆbP#\ٓ,Cs\| \8J;vvx6^Z|ޗw~(~ ?$0kw|v|A$)nu'tCm^k \u \A\Qn_KeKi~B2;vઋþJ:\u)q\b c**½ @T:l_v^-~1߳O_AWgs~xZί9ˑ]dͽߏxsq#w?שn=wSmTo~\[ۍ%lVPċu.j/br@T`+){i`8?V87w+O͏'X AXaY:FZo_-#IC 9Z.PVKŽT:B.HK!, QZ$kL־[뇣\y-~J:OG'~3-ꬔp 7{$Z21QYbpX+:smz~(ɋI1T5PLbQ䔱R1kܼ*.mkæ֐黙~viwJ˽D FJ;Kǝbj?UǏNz Ԛ$C7VK/_ >'0)[=cU&Zu>B8 c26c6b6`n(eZI& 9rx?{h* CCIQJ=y=M6P&RRa,1C~@BHbaUbi-af7*#ŋg+b7-Ac^ώc71k-a-;֒ ())ZRbɮOcuNEݪΔcm}8*\I`U# 9kS4J3Dxhah #0 Uu/Ccq^ N?bSĂ&i5^HaK!$ڨ >+ɜ pHDRKŃQ7 {NP)g Ag!ՔG1n/5"#1OAe ԥo uJ7b=V$}.7>I0Q_&\JhR}oQ xj,v+,J*׊e>jj͌#XâU&ʑS-,j]rO%2 JscڰH\NpP4QNapJcuvD(J#m'd 67[HQhf VV HA]FPF4p.N2RX(S pU-[%0Mrt**|- Ʋ]j"Fa5@c=U)`B]BVv\҅AwS 0`S@ C=RX0,$Lh /*E,x a|oe$h>hpKckP\ڌʳmc\ܣ_8 Nl$1K) ę!G"G6l]]jvEi4dUt4كҕjIJ%UA a1'8Lvs_f`!3s՜@$rĜV f^U0 6!+: tq{M ^Bd,TɂN5~J4T1hFymV Sӷ"CF糳3yUq0O&!z*4} ^{@#4{;KSv=HA/&9І/AwAK|̡P=n;t ((L(,RrYf}JPD;뒄 X-0*y%HEBk!3ڄ ޲VAqHZ١ Zc~{mȌ1)Hd(J+12@!~ЃRA*8@C8("㬪p*#X(&a!dEA6~S('J s6?#XI+,IC5*"bIDe6RSҫ*p/G`A-7mU(@6 ~+~M-i0 z1dk6mۃxڹO^ytYea|M8];uL U@ԭGwP7ۭIGOBFf+q0خGk'Q4 ͚w)DzIIbɃk6vPޞ%A89)͈=Xe5h7 JxK̀-ʡmM1WClܔ槟־O)I4hqo]/Ӡy*cf6m1TS }nB܋R+jýlu-e/qpprOnҖ2n?E sBdC,R;V׊,;t;DZ|JC2=!`cDǣh%WxH[-MOlzbĦ'6=MOlzbĦ'6=MOlzbĦ'6=MOlzbĦ'6=MOlzbĦ'6=MOlzbZMO#(:?pz> <> }@b> }@b> }@b> }@b> }@b> }@D; (\Dц> rͭ>WRi> }@b> }@b> }@b> }@b> }@b> }@bkr>$r5HYǃ)/{B}@db> }@b> }@b> }@b> }@b> }@b> ljZ蛟.(ՔzLotKku̺?*닫ϰr=(plK7ڃ-pmKR)->&s@pRW#pኢubj22\}-p%ŧ=z ^|slaG^vsEyzdzLR|:\-ezʣVi(`+(\(Z(ʨ^!\Q˳υ+9pEEi ++)ѯeH@[}xkՕJF}{ڛ-O7jPHطV:=ϮVw ܑo~匳./?-9)`o)QzQd~(mkh{azCz9K}rV̅3>^gHu9ـ.'Q;oW ?O]% ToPK) l;*8T^d~V'zvŶj韭>\zm_yz}wW3~>z#+׶:B?ίՂ(>5UDOiKƲ3ukWZO3J)-ZlInlBl7#m{o+2c0*"ɭ9֣fn_8\V`H)hvBze{~h̘X#[8Ư^_\ҿ\=8oy7o>>]]&͏{NjV0A/XnocK3gawqk8q|qzsf eEz5Nf}wtTw?apZ*rҿ*,3th*9Pa Pu <6Qj*)nldͧ41Nx8MOk#`D<Ĉw+7U\5ZYE%z6WUeI.U0ע;!"NÈIãŰ\9Y/y".5-.ȸ (Bͩ^1G&_D( p9bpqKĩzǾz~ |eQq|p5 ^_xs%<:#Oz+W㍟7?'HlMy=TӒV')kVNܐL'e3~єM }g_ e]T)h|]lMۦFɔL1W*2" NH' aL==f3?vuidhvs1;p^&oы|ܔk~>D{.t>SRz&sD.TWUf;]}O%M4Sv1VaŎ:ge}Bh^,ڦ@v9/B$F4;d0¡ R[`:\z\z\z?\RvJ{%/ZNntskN%ҥچ8)6@2np戕VkPz`%u<4W'gc;Uut%|0|!if`Y?v>uQN$`cAFSRl}P@ܘ~2dT/ZiK\{֥NZ%i35M$QILH?' 
+ s*|%3fǷ=2YX\B]!]_,Y[|Nv|e)uNHmhUlTeeEIE +L9g9EgJE[RH&3%4^)[NFS& g}Uߢ b+'kX&im,_JYKs+i;'Nc CIz4)_e%d~!lvs1m8sNE{?w ӃU:\oe^z^fVq"ZӫVߎ:q>;;k󟷻EnuOb]n;6~O'j鮻pMu,"k\QR14p&5Qe/7NJFJU4[?^^ݞ{q3,7^}9ocLc6*cƗtZ쭵N55mbVy&A멜Qp&58~a8m='(9'/j,`n29HیR*[%F:-}je~*8z(Gi~8@H 'KS8 I_Ŭڮ.uJKAdSi [ k5v.lJH λO**,IRhKL -5R&,Peo^dVm.s(?̡٣yg\͖lKWKʉ;tO7Ϳ{쇣<ѻӫ(fkٻ6r$4id..&36rˣGb-vZ%YjJNb5EVE*^e&}0:a{E opt>WJXUFKqM>IYwV85 τTr7 wRݛźʮ^OS)z`*W8OLj0'ټ M w$ٺA:}w&gCW)|}Sa0#_ɰF J(J 2]ˆfihN2;ߕܲq[ER !9T.{ؘg<,S~nFY=VSmY(aj(poѼI݃pkY4V>\ΰN{"@}O>N4 A(3/[ƥTR?g:Y/ v-{ ix iߗLZxoޕ>sK$uO)0e3M)!Gy F~Yw^V/n9Mo&Ĝȼ7-pZJןR>U3FΤZL xo@Gϛ:uzܙstŐlU^09|~)ȹ O$[(1nF0Kn=`iPm>IVykQmb>.%M{ˤHVa}2g+x}WO~0ifU&PRFzIrsM3>oAҦRܶ[?ȒrqfO|9[L`! Sg*k$T'%I 嚎~}`jFڣ8WzsPmstk/' 3MZAu!KBSM޺d*5QU*!5 M=ue֝ns$D~W|Bg 'D/!=782! YaU+#Jc$IDp靜)>gG\s7 *P8m  B4ԌY}Z:Yxxr{,:Ӎ]ҎuC\xkgPJz@8y<a.x/3!/W![N3@3zsf5W27RZn5Q qG㉂,I3NB ZG9Cȇ r J8P 8 Ii4DDMCwq'  {P5;Ϋ?dF0;MTZm&n{n`j,Q85?"DښH$5D"mhe4.p\4 $k-Ոrc%0)C4H;]WH1}V?Q L'Rɠ xLTBdSWΑX#o#~?R|$`Ik$*]40 Ebc S]$ÌpJZb=%W VGPqPGYMV` $ "8UInٕbhnll4E,u^? fE-6`Rkr  +UڈCzVWIB1.I󴶿 >3YJpݚތz+|(bRGgnRd -)PkBrelYU<&5'I0k1W~[g}; U+~z}m//Bm-i[ik5CPze3&Š\J갌 <(ӎ/G筭K6:ҴZHtGPC`Y3 *:VV1]SWT3׻u/yoߤ߿|%&__ v?AC Ohw47m*E{o]w6._݇ӇmAsZN J?d?M>*K9`yQ^ jvM{ t~%x5|Q$h_.! 0n@32V}YDA]9_d@p4ʇ@aiEJi TfNzjt1؎pϜ[aT~Xpw9{E3xx:\稈FĎD3.Fu}99zF;K/;+`Չ<ԝtI\5Ai=n57mڄX]$jȉR4gKro+L2֬(m`D~\]ڋuD$#'!F8A8R6n8jf )F_p@4ul;G1a&x94֑aY @6A[R.k%.Ʈ؝-B_A0WZ唣=z3Zu+YKسzS}\d;1s-CV\K K7Z${L<#Ok'iw1-9JѪ` 1zMMhxLF4*PQRLQ0T&G/L*cZ!tE N:=}WltSr8Y?q 34e?qR2_X̂U b6<!D"D 'o;Cr)GİԠ{Қy2DsDD$" i!(OW S:őp.uXEq%VL! "JFc&E4vNSJ:V7?\^{$uf 86b\` GՁs>`(~cG4džb)_\;<#K?\Π;܁,݋H#x nR#H.`H5ZvQlM83<`k U cJ#68R$#?g)Oc.*5XMn{:gGUGt( |Zsb<աsbj馉a}RwtxBGτ8f *AToE*^.I)@ #)6gF H>)1&OhdD+6-_n(덯IQnX:\dXHͷ>3>e+^n(c//$r9^_C\2FC\1`G$eⷡUijl~->dZzn ^Ãiɝ({?8~gjkιU5. i=K :ejNpKp2_3^mJ H_xQE1KE^HߚIHy~ 7^L.̉K9&ðU'߿~Α"'%1̋R2(HVfzYγrX/Uz] Ni~`배'Vк"&k/[+Nn.fo/,W{]duj/ˍ-*&OzŴ@Mfcx2,F{ dUi?&UT,LoPv8I}s~6fɰv8KM _{U$N Ŕd4*]XAy,̓*gJ\ɗ3E)a'0<`z<`zt;?,hƄ&"bU2ڨY.qІ{*N" " YDC"¤# AHRVa S띱Vc&ye4zl5JDt|dЙ8[1k8-(2J 7gJKSUbtZ-<ߢ }ez܏d솪]x:K.ݘoIȵc()IvyVLӚJׇ`Y!kBmoǥ!*b.߁Sν~WןݖG֞Jg4{>~rgWXNgr%J56/j^ޟcL~;1baK <e#7Gt0ґ!X0llk`9M q0@GZwoWGCJ9ōJέiU"bK(F[ )R ^رޙ8[u twa}=^V} ^a*ןN0 *ESσ^}&PgIӱRkk stZr"6QoE(VD3qLtxx K3qq*s<Za_|<C0bQG"|%t̮1h# טm'S~++1Z_Yt~)\} ~י{L,c05IJ TQ͍QmQ'(߅^Ipu>r :ʹR?۸ړ L&fKըzx`$FM9GP(Up80Wg &I<䑑BQo" J)xP齕[Ә("B Ofjĝ̿yjz#tD$#FR;8RDDLL<`8SI ­Rii !',u!|RiBBX1H$0NzUy9>!ꈔ$xP7P4v`bECpiby:~TlTF{t)i;6$+pm!]"#.&X g.+߯jHQ#rҴ1ï{~LW u?-} ~nzڷ{ xlJ]i |ۜgDz{k %{!<x`s3e`n90'[ݜV?[{NOv9 s"_e6VjZ4T8mRĢ%%ϰ =b"G, xKTn—[떕å uTLd3@,[^Cggm'!;r;_ry?O9y1ͳ)Q]PR;ﳬ{;[JunFhuHk^]#C5, b1]VOѨf)!ʶUl鿾}xm=k=V.nvNlAx_*>ֺJKH5PHe B*R괽 \뗦=.58}^#`+)*Nv)IJzQ^l&KLbr|ܰħ5O$YUغ,urS!VZ 6-]%X\&`Tଉ1xlJq=<=C<;qFn+%ۜ~zz΢EA k9 7`Ph#*8甎%*x+"D'FW+5F+DpEW#DVtEuŸ^L* GWLƨjIUQA9Fk*zL]WD vzv*p`]%v?$Z0J`$ [0%HWZ]1)]PW:x^8 FW+(Թ(іj2&^gpX5֛ Toէq+>s9 cssk}Fӑ~vzIlJ~Jgm͚TScbUtX5)ooR-Z#̽KB+m m1޶]|R-GJ-?wK% M*1]lubL %JBbmc:+Qjhg?"Hꢫ+t銁+`isS2{Q H lt׉S6wM3%r tEqAj2拮+gL\&KWJg?tǨ+oQz9S 묘 O J]PWD֛r+5btŴ.fL]uQQH`JpEW/FWcLy0|t<`n8p V0PtF3]ac銀Ѣ]E+u6{]-TAt}銁+5JvG(s{ 2H> S(c;, `$K0Z-ҌXiC&Jl-1hA"`]13dǴ+j9/IW@< A֙uŔǨ+O+tŸCgOEW#ԕ[ÍRtŴ=}t]15EW#ݧ8A"`]1.)bڡLĢ1*FQr3Hа/]1.e]1/C/GWvǪ3P cWIV? 4ڡtF2[ote[Zg)ebp)bZ )+rJ[Ab 'bܨhAuŔƨ+ߛe.x&E{ >\&?KZ z;$`dx&0II`wLЕ+) r;R]T$I܏J-g](f{QKϲ{,*84qNH(%ehi{Ӈ҃*C #Z\0tE(1b\+:w]1+cԕtEZi1bܡbLJt5F]y:H+ƵVg]1e0EW#USrt^βƕ,h,aJEW#U^8ᢜEL &w]1.ѕ۱Li<0'Pm*̍ymbԼqCO^n4)ʟiƉ4ᆃ2괾䃓OjXm]e.6#g$qQH(j=.ufI3 FW EWLk}bJWt5F]9g4o! 
iz`>Ҙ;JRtEVY1b\3ʴ&VD[t5B]絷tEq++jhu6xV۷3n5][__.fs߫k:ެ_zut;Qk/_M?v4/ lk:cǻLM?8ݒWG~oww >3@Mj:_On6M>hd߶|TO~hs1>$͓?kx AͩA*/uu;Z˞ Whw]GeETYu}qұڋ'WY%;/m X껅_9tAY OFRۿOb6vޕ6|ӿ_m\l6秂n3_L@?H[23ƸAbDTO1%>xk=}4ܡ=H %Qzl(,$*]=ՄtE@)(被J+qAKuŔ]PW1aJsgeyr84kAQ%z'EDk]Dh^RTpJ꘻2=3+IcvAW]1s?yt]1ʅl|9}pdNt]1eF+O8Abp ()bZ~)]PW!Z FZ93̈SwRcU  Z#EWL 2/Uܱ 2v`p3CwhʦF EWz ]p|yl.]1^WLj2 sHlC1"\#EWD"+ ƨ+DtƫD\ۙw`h1Bc/1RRaLo'l2R%)QW.׋K֐X1 {[g\ք5V]vu~}!^ʺE[^,Z`ElUW5٨c:T]|څ{_ MNzJK([*9ǔZIiz՗p U{w?zE? M.7Џ ~?]owӺOO7 c=yuU=⫣^*yIL VN'ᡯ,M\Zx͟K{}#=mR mx>j6Gr+3:`uzmG>/|c#ɿ>ߺ@[bh޽~v}z[;_;o&__?]\߾#>]]C}HO5mC[cwzHt5?>V$!T |+,X.(NYԵUvTfQhuG?Dtў-6>LM z㲢PBɳa-,u!-amhB,jb]Z30e}~~[5߸5K+;-g/~^|߮}]amsoYo7_=?ƺW{đ8hV/|_mgͧb馺8sVv;oOǿߘV'S7SھW[ߟ~TaM7'b{\\~~ 3nuprl}U?}(wv]_mG^=Pen/uCǝ|wvEY#ɪ6]_śIWmLG ϶LM}Qn1@Azsuu~Igyh+P|nڋ@|Ϊ84z[ӗd}{5$+դ ?5ߩt~C*L5~aקīydwO:2][or+B`G{W-pH^rArE_m%}T&JP4ݴF޵iTU_uuuU)w?QF hN `/1쳰lzy_fhDk;#qq}t7wR^w؛=Eڸޙ(-"ʨsFC2[ |1S(:K7BQ@,sWp'C/cC'l>.W_شj^'.pjBu#AO{AgqAB2s:TQ$@ m^JlN.GLr =I޸:\9"ԲzC-+0 ۰ƒKTp^Q #nTІS`2W21X%Rgz&3g06skF9$$XR`}HeB~." F(X 6F퍷cϧa)}ZiGšwB7Z?/z6}ZހPs3c`s39O`aB^|(@g5h;W%2hyxN .(ݎ0nd7 # .YBcS P,Zv[ $ #Ejdc4%+g`UldcLZ|ZlrʤF &R{#QK- E֩(jX[9((Drٶl]09-e8u_Li_J"!b+YuZ5NK> 6;tڇP<+\"$$"(h $j BA_YѾh@Òdj0D:cNQa Ƣd HLZd>NIn>؀ll1t=-G< Át*y)yB4yJG1#l B Zc}}Σb;A͓bdXp 9)rd[`^)o=SFn CR&abHBYt$o"bvɤ.3'b@3qXqHCf>ןreUeH6dy)vP£gjwr+#_7 [ۑeSGd-Hk{o)[mt=-%>@J}YYRO_~<+OG/?gLlˋA^zAH߫_~=YU;mD?F?&yۍᮭ~V3>\'i`\%oVtj6)$K˱h3Puԙ ~D*v=n,覫¾4BYgw>yzt}h!|Km@p׵׏d1T*돺r[/Vw9]x:eܲZ[>˝sN0z8ᙏٴPdr:^׺Y]w ?BWz7X`͜=j's,Q0窔dz~³l-Ԟ,]񯻛y S|1vdOhYA3JB2e} HJt;UϘFI;Izgisp0JT YWcj $r"  {32\!6M;n,N^+]o{P eo?^_1ݸ=>'jwʉsv͖0'rZǨ,+F 6A,l1a,Xi=Bz.l<`2]ԪZY6$ m$J5VKH7e#ޢuU3p1W zx =yh?Z~iΠ{MNC3pcβh~oCmjS^mR~9ob ؁p ZI݃oG~JF#}"!u,wj.t,zDJt9QEaΐo?k 3f@30jg:dSPiH“/⽕.)#@1"T'P3]'"DϿ0]`!|b2 &J*̶DF}Aq8ǚB) U'~o UL;巟yoox2so߿g` c4֚`nm?oeM\z_ [<qG uW7y./k{ysSbrnm_Na,]r6kW<Ȥ+*X|a~ҪfUhUL3 '@AXW&B}>)]#] pP~~eUy[A>9Hk^csRL\; $Iఀ=_wqmGÓb>l$'A_ḯ/^ut+ALĖALpe۫el1fVi3`*\˞2.p"$I[UI$%d3Z#/ׄҊ̹]D]vr'}+m|z=s2>`z}yLOu16I>D}P|2fMEa'0)Z(kPXιdH:S\46!VQj>Ef8P QsőtxbCriY`y~ 9kmVͷ̏hqVu2)QL#)O1O\8)Kqvspd<0 %VڒhjjEѢv6 M&y{Ϟ~(IL8}[@b̎2&ʽQ4Ѭ` Gf*"rM,7}'63'L:#.vu'z}U5RIڝΏ{n6O3dfuXե5؁wl.&պ{}Z>O"N*ֈ;61< ˛kCI /x2D*SY:v#l8sHg9}@VxЯN}xV1+O)6~ǓIl?_Wɻr.TbV9*= |I1}Um+FL[GBo.D +.I)K*JblShEHb}y&&r(ɿ$1OS7;y< -GJ;T>@0nRxl g ߛrwB}:\P-ot u鹻/oͮ)s{Ex(x&uRk*}MVeaxlv%(YML=CeÃ}o;e]9و+Jζ+9]-f48z]8mWY23]L\QaoOKS'Aс^Sz̙y4pUV]lb=\=C2Tq~A6"ktV5q&/6eg99::}xMv`/[,:hlsJ_F!V7QiDnv_͌W^+tJ|۸`k$II*zL6BUNJNw{Uķ:{[T- \G4/g㙗\vl$e$awh4a9SwTB !0Б% ׉EV NZ:Ü8w9&zQ [7o|rioVEZ Z9n "$!WQ o-(bC!Vz!2_*V;p}նRbkv"EEZXu|P89ޗ>#Clv^T귱^Xm>8bʶ_{}[ .E ywn?ՌiiζTPd <7\n2IdDB蘍qA-B :*͛0:RjշH$A09Tk;eWE_1,Қ VJgs<tJT f-Ok zuK=Ƶ^rt"y e"13_W <˻ Szʚswm۠NF:ɴ$0jc6.:Gg=˻ vs;ʄ-3 p|NSX/ &9(5Qr=CE@4 e{'A&:Y=jp9N%s= q"TH'iB1첖 !F듕INK$Z tn29[t>#}'fWyNN$)9O{ghڡ^,9P/rW/r2A/nEP<%$/md<2*OFP{`V#V^1S9rЋO'5;Ea T؎$uj |8QFQE=7D?P#P!mFntײf~l#-jJn+ЪJe9@u=>eH  G($Kw|j_6:a ~BL>C=DP&:3I`cLzaɂL~a/(N3P) G)jc"Y)PU' }3JqM("︬XeIF-G&8mw.>u&;H3eR5n:&151@xEBRH㌌L#"~3\[> BCQe(jLp&˴R) cBw!tЂ2C  ~`HG#e:*gmRJt!Hcٵ/c1qKߗ=49h-2.PgWE1T^)"j_RqȕbOz}ڒpJ'fc+n~yNOFc ;e$LɹQO||J$^)kjI87Z41_ (Wl)nr|>Ϣ8Q*-G/b(l1*xSLmHQ‘iDggG%h@CFse2߉߫vٶM(fk+.rT Jiջb㙚E]=M׺䇪EupvWI\+y4NzlY횃EEŽ{_sBz2'.骫ݬOCO*#G1M;UѢ+k׳*`npD⑳&GKi)˝&!0vv8*Vb;KQ$Ŕ ~J'%/^8Dhcdv(O.4E> BNJWfq뢒L$^ =jjU8˸uꬎMX ~PZCD-袭4!ݷB8uc* "J74n\gGշn)l;h_T[Z!l<vPǢ⼮Ϡj==)p67h-E ȣGm6\jO##"$K{j(~9=%,F5+%tAI'd1sgcijDDdOKDp$e"L%u3cc I`I8'ɹ@l_VAA(9t @6i(OvD|8i]Xb\Q1(UbX&DaB9^P`5Lh+6l2{o}..qu. 
ALͥg1&v`d\iKX Qp_rxiML%$/:99~)Ap6x VE(bI*⧉f"S))b~ǂؑ砸H<&F "EuA& h[jDz%gٛ8{J͜t6|OܹOF]=l0='o6ڻ׋^YG As!%\tDh*YH wS@oqjzJ8%`z]Pvh(A&U:᭶&H]fUҨuRKK%OBM+^dM&,{kCoSLQv`YhYϲ|Tc/f@%{0苆=PZ_$R~Taf@3afzɌ;&㴹-\TBWU'"KJu{s{n*VFz\if5h:m[~J?5An>f"AKDFFrB % #ΫKVDqI bp=ڔxv}\\_}S0lmwM?*"ijWW;[{ DJ}z>c sZk8NPaU2\VF"d,r>14“hTg5,ߞO 6Q%*xb҇UL$Pb pq2;GN; 7qv+ nkNGY,n9"y06wkۛɫ[",-CCz(۲b<'ԇ t ,(!.p5Jc>PbbvjeeHoILsKuIZQP jjBBsF}ixTsuҔ6r1)#K'X1,` K~ޕ|A5FǗޞ4^(L|x%{󨗰| \Q,C(بwL[~Rc{9'JpF,Nm}$tPLk&+z_v@5yIkY%Xə .+_MdW@9Xv}M<ݖ7شb7]v1NssyW -[hڪkZ3)(j>. B]*b;'>bbOzcdNo5HV̽J^ŎuTv9tͦCQ<4?ext˿)1aHԘ}kwmny'm'YѸ%  -ަ=WMd3l5&P/I3rVF7}nGUR[ !Za'K8i1C▱@EFi=?EtD1@wd g~:iBR"MJ2@G^/J/@ˣ~80Ţ>3.2:P\%F7G=~o;R="0N8K:YcFcE`*muId69M{ Ohc"!dNScp2uP#ĘDY &gMuR9=#^.糛*_qGlg!ozu 6hpGW. 63dmM7MOSrhuV vJ> }oLϵ˺v'=>t3crmk5χ̠^[d{8{TuG ʑǽkO%rjV7yy9QW^ AXkprlsʓ3UpUMr_bsTG4ͭ(ZZiIH"#]HGd0҉V1*9./_+h&dt926" Pc83w,@ZXh.P፧:&5.pEphi9$RFzos1d մ~kϳio!ۊo {vsqq}TnU§ xe*%J0TgIiɭ s?[sWrtIշF5.4w=Bcí=T"{ķ~\GJkb-v\Z] 5 RnL^#j|;##^)>}7yvqiJF<d]K!gr˔Nlr1μ xC~MʧϠ]gw-q |~g5~uC~ @wg-^Kp*x TWevwh|' #e zTCKyD͡?S1LWL!֯r\8{2b.,͹ Fz(k J$m悑?GlK̮_~Q_qϱ8yW?Fv>'MM&z%t; э<غb,w;\uL-RxxoZch_.ӛ[hkS_- ;Ú,O>K3O1kcu78(e65d |ױ<09l^Z5Nu6Jdr'@X~3܇,d~[o'PKCˏwMVK' 5L~٧+zp3l,>ŸwdZ9,s[C˕9 :SU0#x97_Fk!?U݉߀_ޕ+w_OI8>p}vjƀtĞY\=R.Bպf0D&ngA cfΘH-0P_xD.=BqZ ,^zŌq9{ZlyZ(?{Fd4!}G#p<#k(a&/8o,R `?!E{јU>'Lŧk~kdl((!<0e$dr%N\tH :WdyE̵ȵp;E=ITFus1Y(Q2f P@+i:.Q:3b 5N!<4R5a&F82rP9smߢfR2ؔÚ7X'V͵؛ SzkäHLznͥ8T̚j"K_ `VY24-Z2$S.DΨ0Ó5%_V DUvKr1{T5<7{v  ` ԃ4kUy6mo&<@`1gQG Äc8K[SkY,xDl@v ܢ|hZ92zb`5HWYowF=O/:uq-G B [v+ZqJKWpY^u4E}G`pgEsyupF5!1#j1 q@]3X.lȁ ^x{2h6[) ~lxcH d39Hv&订Ŋ4r|1H t"BMȘ9T 6"w`7֜]> :f=f~ n-B% frFLǀ0>"JP8r0ɇ n;@vY,첯 'ֺ $tPhv=!-ȗ_5XP芼у^ 4Dm iݣچYjHty9Xp֭3@^TTc[dxFg6iG+.n"Yԏo?olGFb3MH6[ey$Atdr6?n;|1} SBڈgi2} \[{#آdETj@.^s@ ѨLj9wK|\yagu3L@ @R{OHki.ƙ}zmê14;VM$-Y}v2(Yu{[]Z"uV-3x2Bi-@?x cAo"vp-,Ua)"hz3 ZؖNܝoh-[` 2= dtՑp4$F,4"fA)IA޺{5EK_Zf*ݸ,Qp`_;.;31AZ%`+7q7oo6mZyHQI ;T00-`]jыqm䄴 ]0/߯ V,:: kiVZ gBhY$`4v/fcVX!|Ӫ0c `A[uR˵2kTrlÃS!^?.P, @3z{ `}v=!AqlU|O\}ŢD}PɹOH<7Uy95dym5K k5p (!#"&FXz), Xژj[ vӥ0,Kѹ[_FjVz@y7&)n$*L6@ ښ\cBv$Z}~5(}ZV/-l "$]FCX0m@r"U'X&ZFYZ V+BfZ)Q+*-r|d$NU^VM9UsRCoK9K`en;VPfdUy'IE qetbقХQkRCX~AF/+z;DŽPshgtuKAdŭ;b DxMe Ym=r}?|^e#~#{ D`x+YlQI|P6`4eBNeWKi٧$hf-?;kHXNZQZ8K??n*Q#&⋛ `&QSɏ m5cA\9HsKHe| AtLo{O,5)ن0W gL(Xvm)~۽谢|:TW?p=wZYލ`[b>d3d/ 佹˿CG #u=F=4\bvkgO5lv%'n׍ݛ_msx߷-ڧw EKoGG]?[yo~8!in[~`oCs?hҵ3^~WY_Vc}w\nkǓ=@wgfO"whDbԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFhԏFh^M$䗤}r75М9|5$\dU: Hu@RT: Hu@RT: Hu@RT: Hu@RT: Hu@RT: Hu@z:0S^'z9: 7u@)xcTu@YHu@RT: Hu@RT: Hu@RT: Hu@RT: Hu@RT: Hu@RTZt@_~`}?'rTz{~:N 6F??eħUq4Z_ےE>%jgNor]=_KҲW6{3Vچ_7Ldrs}MI+/#u \   UkK,4lԍhj˝x Sˢ\\s #>_ޛdG~\K)[fWc3_9'O?ʽ79S7puc0<[uy!U;f*;qe_@cvwOw7'M ]zYBXڗX [uyks= .JX C&YϘpԜm1«Gxz6>0#ѡq=&_rT@@(PO$l{;o9?Onۖ/ƐKFkl7ӗZj|z/vR\k3pv-ĚrP)\.YR=gc >ޭI8;wo3a,Vef6@ۛ02a,>]˄i~@ d~| Rh[O3-/o(DZڭZkFO#$F lfSzL$SIJsy"rf9 8IfU]5aUtW#HaM;9_蕓 BMI7?tԜ܀;Vq ,Uiq(ukdRRfZ"%U'sPS>a~h؆S$)7._w?Yp]*F`hS=ZYH/$6v(&O"vY;T hEV+P KUS{B7>ʉ2ZW ڟƃkLӧ\-`ͱGGBGF1φݎ#˯#A%qȗ9ia S"D:Јb&˸Iizd=H+-FZ}딜z(KT)\o!l *EemL1avLN/wiJ(6:ާhR8xK+;15TW%7Z1Ҷx4n *cNvߖ*A"GiDԵk_έ@[zHBH \%o>TfL62=I^Y"-(bCibYrSg:!:X';X';Q,^yk6;a d(95%<I9X2^gRr)i[)E/18I4F0ށpt]W#O7~a~2lJa<$ausj{ud\ݳn_?bUM*R=xYb=0b>~C3'Wv4{IQ/wퟺή ?ڽ8h2z|فIWcKGCm6H#ɂ[v}1z䏘yc~~N&{[^E~cx~2'Ͽz`Ey3=k.9.-c^۠5Ҋk;mZnm~\QJ[3v4[Ro!X}EQ,"U)-2H0Ոlh ޓ'RH(Iܩ\:;9/ql\ {`[A 3>d٭3FoHe+- YLgT:fQZ©&FtOb2ƏF퐜Ud&2P ^y@ES01q/͇y\()SFkK5b*)$9(QޙAC\3:NIC 9BT)UN#ֳLiIfG&EIT ;[g SܥVNȖ8z}\KjwJ95j>#*ј晈J%ʩfYh1R׶n fPc4 cdAJˁ+$re3ќL1Q+ q5r u- u<3z L׭r156LJ)Z7-: Mah6˼QvKo^v:1֢\|w_nMYIDkX- wD3 @nk]^g - M*$\HA$u2Hj"5*Uut)c1r^)gY;zFv3OzFn$9;=u 6{Cs$9juAhmF^x 
OxEZL!gkep ځo{B \$9v&Ss~<;#ٗ<`ؔ)HIbH% )l6Pd_;ΐ!;T;Cvv+Y *8.5I2ewQL.-ZJx\*Kne}XfxϿQ%M@ce)p#.2,iy<~*$5FL2TW*T-ң.V2UPjEaAMӦ04}^ =uɉ=u7QIjW+pZ#bJm" *gWY0Qq>Dp@HS!ȺQ[cӒ8l GY:fZ8ƔD1塶fFM,ajg ue]z]xV]UV ⑕f_gaa˟ n4<\cvhjAY hZ"&s4$KFpA5(n)P}Ti¹dVd{.rZ/jShZ{d%$ :i0ƮFa'abvEkWSM{sY (_ R035[@!ṛV7"+b-dDLD2kbAmQGǬ" hT'9ac/"bFjD[Y#^#q#@6Z;AH Vx-(yL 4`ḑ! ]D,TqRR00}2餙Ɇ4F8d('*kjrp=ٲp|ոD(y](i{ X`Y<풐3Zf \8K#Yuϡ;kqGW!?MAU&z E?jci쮥h1Jye zh"x)58=!w;$u.$PDK{|j5TK8IOi,dQSԦ))B@D {FrJ=cKqNc9\"wY/J:&DVw9T6FΑQ+mڡD&Oq٨cU([ yږCŞ;q@r4nYGtVScjR *- Id<(q4R(*PíoųmMd1p7F `L:-Ht& BB @pzY'Vyl 6Ӽ{ò[ª.-ǰ̹d˼}Je )MFj)A<P=_g_ǡߚ^kk:Fkh!iuEH lN 30P/erA㖃0OTWN:hA#0!FJ{l&lƚҹF mpj/)J"W#h^]{?ӀcWրM% 0xWܦ齔r&$ ߏwcTavB{ 6饫aydؐ 4] *? (~Yi$[)}+}aU,um'/ kdQR1 hq,S6$΀@ht732T%MPT J1u=3s;{a^[  ؀K @V\(KqtTaIEa4Oc,bl|)F}:|.IKгGJK%QŤN=U)h棑PM X+RVdeJU#35x *Ls}7Yse0k(guRKjy̌; #Ц4[: Aml`y чѸQЃJ> /ڎ 7JM*a#Vg)90|O]z1 >=\5=VU1RcM]`T.`r_x}9_?L?=y7 z`\§(Yw7hw46MۢiS7MGC.-C;p˵@/~33.=j#mVMJz ~Ći~^sfkTDsR! lJ]'Wnd'{X~~ѥ)-|:(̃=,0PJ,w!G)}ֲRdޞ?ܤ\e#Gĕ^hR2xiAIcLT(bM/]66#VAj6ϼdyARAAH 8\XSV`'VO.kρc*{m`֞2)Ca]4>LLwKLC ]M8ʩ."mkIP%̕LIVbIa g6?d^a0y{Rfv!bjՈR,S06Jʄݟ9a.g"L!n>vezwκR߹Y2[woݒ(&cq!c"{dES0\e ,#FV+ѝ`1VYE AB)1*h2:ڀm)Qup'h:\3qkπ&n:G;xz^Wdxj+2}e'3Oj30p/n< wIU5 G5J*rt |ofb^;^$~5ID처ٿ?l7i)Ҿ+Gd7)]ECE%@$K68XyV-WYGWAv4~\6%/…@/`*ЦU_ns:eR.~s ۪m}QݢѬV}hxꇢVՋ;H^O_" x1][cq?X5: ߍKӒVKʩ A؋SL"TВIO8NdT.b{V/Ҭx?Q/~k|鶮V?mJ?l1Aa>j 4"(ra=Qo&1Ev<91|բc83m~*+tfKڢ% Q=.lXcdZђX4jndLT Өc40t>FHcY1d eh#!hZv9V 1@$M1MJGn"d3Sŀ%T8 y Қ)RRarR"!0,sK#6넴Ĺ[1Y p7Gf`ɻ5UJ^M%e-5zyFʖj+ BZ)AH\HVJYFB\`XUPxdӑOg>'kM( T9ŸVsM0Z9$ Kќ%]^dӉIq4O/?$U%q5㏀Xʽ*APiGkg}:nlڝ JoB4| $γY8{3zմ0g?01tPdu B dGRJͮrȼޗ磿[t=jݩ+[Jg.auiJC9%c(S6ìz0%JWn|AL= jt|R;."#NlUN/r"aBNI`F~2 8\#ŕr7oRp{gJ06ə>A~ m40Po /tR70ߌȧlQk~X1$j>C{\ng*\ieh( mZA0k}\'\_TN҈kn5_a>%.;gDE)_"^kꐬ0Cާ0MR-PK|M:}/5peͻ{DwݵC:xO[O&YDluC5wT/+2~Ԙg0Qy- ns`UnC$DQ%ov)mN歪t# B{nq O**(0֎ 1$"8Prlϲs痝U_o NQA&<`V 0 FHfLGO:QzixZDgctuA'q_½UVᚣnR DA"N0D I` BW|A@JDQAKG?"uyJJXUV}WWIJjJ4I qV p 9p9}rӔ>!As;s$z4w}XjR7UMBO/uXMˮ|~xvAq)(ptk h?;uaWL#b<,;ּ UzX4ۓ_j =@?9xf.ܓ(KZtEEe^߅م |޹ \o;IʾY(̚* )((󤘽T<7@~?>֦8;Ofvÿ{c}(mr#xսZrRmģfY7M/_~py x*Уr3[UMt. UeV٪2[Un;L19lUg6?3[UlU*UeV٪2[U,Y !/ُ/%ӿdL3C/%+L_/%ӿdL5O/%ӿdL_2K/%ӿdL_2K/Fmgv=O6Iݺ1:1e1=;szQ͌s7/*d~?pu͸m7t~?y]9 '?~Q͘wa >W}u>F=Phj5i&U%h\(.aT̥2@JF/tJ~Ë Cʜ>|n[VQEԙ@`_4{~vOU'GG:EX%w,hZfEӱx&M8o@ųŴ-=4 w_֛7? ٜvd dLi2ZY4kf-ӬeLi2ZY4kf-ӬeL?vFH8.ogDxxcYS bN3zKf-5W4RZn5Q{`,K@-]3t1^u* uRWnN-"$4"H8\' ` t3tdZOՇ˓cK-a5T2^R5S0momF=)$߭l=LO/鵏3o#o?|:}hn>J{l$4ԆSHu-Y%Pa]3b]3IX`yKpv ˸J /nzr7'Rk[ls\{54щE-:mA篣q+qB+J&4'ڋr.7\TӘbA8 p3_lW=b9A q?L7jvMavٖ^ nX[RmhvCjgIqoa{͏=tEvLrm+ Nw͖x5$nX/Z@E$>7ұ,tvds:Ӈx,wѱ̄3ow{_Lj}KbL/jv׋y5?:lo>}E ^`Znt7  :wu]=y98xyQ2C_x-EзLNr iJ` ]YP*Ex O'p쒕QScXA0=:(#L`ҪhBMͯ&>e61x>)xkfe 2h&&{7p C>csG $`;#!HX$NU9ZQ! Y# 8 #ܔ. 
`zdd]V+,dp}~u68l'px5{W0<9ʒ+6vQN+'}g$՗Y܆޷> ,f>KWfkT]`p[/H]{wzfN6ä[ܸ^Ӂ{m( ?/.ƕ /񵔋-hkil{u$JjViJI^ahU0Qg^9ax<`ٹzՒiKVɄWMyexNEi-9%c]) @Yd^zQ۵ 9b@T8`du^HE%XZAz@._[:+(x4STq>0L5a]3]XZ1=^=^Մ֒fUEst]rche&| o{z P,wHuG{SzRkػ\GoK9~x'˨5H )1,#tej@$2qLu.3`ӺG0ͧX̻}.…\ ~WmGKЎ\bZ˘(\0+QZa ])*AGoXcIp:|̴C2vIo}sXv;PJXd eH+"fɎTLRtdHd7QLR #"zL$KYcmg<@znK.l3CmD>IH$P?D@.GR@%@o,0WS}i)FlR!9:љ@Phb~[mPxIÿ*d {9Zvqd!jkBz_Iۘ$6gܑs1#'FKҞBsJɴpXB]M"|"8U`n'!:G6!՗$Yr(*:z@³SxR:; Y"4 L: o&E cNڪ`+GN BOkv~LV0 $^elJ)^42#r* XЂe!(}HlK(#EnV:&Qx,EENRJ x PEP[ֳZwu k{2ߋu,a|~z֎NڒO~ZMs))1Egm4`sz7fF;-}Z3v;Ʈ%cB yj Rh@j|yQ7 ,ŃMI4 Yl,& T~-c) c[ce|+YVte_rkʕqת\+CŃת\3j⇃T\%ڐ\dtp-?4<@ls9*Ǜ!7m >S}5¿XϾNx[z8% FYLAP&L,O1Md,ZIk{UD#+@\S'HՉ!yeWf997{l(ozǂz=axdauq=[TcI=K!% {aϽ_Px`A]U}g 'B$mt41Ԕ; Ah2zoUg:Yn,𒑍A'@K@Iix6Fq0ɬA8_y]ewwri1;+FǜO.lC Hk k0dh4EB:YSWY 9cI6ke4 r3]F& "iFqk+KujO~q2R(/BbIzōH-w>H9h$# B B:dGh4H9+!bgpb(pa VU›UΑ[bҦDG"kKcXK9 $ȔiI746lќBr= 18{J|>uwv:k+J&̔W(L*I)$0:& km$GŨvG,/;i-U `J%Qv[IQ #ԭyv奖u}}5~i5S} «P VѧZPRjvLI*jcCFqFmb 6$=_nӠkV\7 ~z*}P|;77't'sE<è,A3è,E1F}Q# +ѝbqI੸+YRJK2zUɸ*.SqWUZCwWU:7+ ]h~9{XrG%~^!=! SL_~@A?_nnC?Mf;L(>hz)9d7*zu?i W&TSm޳#?b]My ~ߒS;oOn{RZca]^_-vU,5U??z^Dxg:Sjח}w#.nwgd1Ayn !N8 )TBĊB~s)I#G=mC)=T]*HʙW_559D{5¨Ek{,w2+/(q[}:PJhujPPN(;`hX.wWU`{:Kt*6zJ)qtWo] RMʓqWU\⮪n7,#zjc)* +wՋN]vJ)atWo]QvRM.kWAk3j{57{vt>+¨kTEImt6GLƔl 5s7W)"a՗|wޥbK y}LRy)H(*z%mY% F'!'óOZOK}Go_]]<:U)gJX/,}XCĎNs'W68vCTJm d`AO&[J;*#,xhӫtvź!^ܖJ>U]׏yzvW~rkO{w*U?7x_ lԻw/ou3_X@qѻwR?t޶z؎H땐!l6S4)t9 u@;b UL*<6Z6hmSJ 0-PΉVf1i[\șb>`I56oG5o 2^\}?gŏevԋUxrh K] 6O]gD,Kx룮<|qNg[&9^: f?!x䲧nzOf}Ŧ3ܬ7WW[^PtaG·N`b-/?zxbj.o9t6/c^rӷmm}NۻeG}oMGFnJdsnU/r?M8iBA 1359dKY;fIItIdsQsҀ|mؖ4b!F|VV PDAm\DSY$1();SdPw!L! 2ՠ9]Z7k%2s<$gKr?`5Ux3G@zEn/ S~20n|ciČx*pEZ֭Fz XY<pD{<4ƻcw}s Csy'*d h,`\}щ]:'%%a1M_u;S||*\]!Si'[ۆ{-89 AҖP2FT`^+pCK AGR =~ޓ:\ϙv.g&@;RW04?xlo]G2EgF(; S1TÎOˁ%!S;W06`,crIh.݊Y"̅ Q#)VR"i@r'"j&eB˒H8dLÝ7P\69Yz^%djtkr!˻<Lꥻwi]E6YS쭄DRE2 Rd`=LJ{(KS@$S2,9@c"SMMPf|o%e,ƍ%rqE*I4ۍ9y>SU8J9GB%Fc90=_-5 h/? ВwG1FiyiMaqk!Ug3Tp)L#kjc iW+.3е꾈<8 ,; w? ~9i?~2 = 50)<3^'0`'r-&<_y 5UVH0v%k*A+~s`qHq@ 8_(݄*,Exg"f?@(KE !hEZZlТ( @(*|252h]sIr&(#?P J>'%#flGhqیf8T&z ΃ץ* M%4*VƘXEsHǶUhd3O!h)ñXUJϔEbYl+ Y#N!ii۶}lh(-]:eXBab)hRT eH$Ex_`[ hXRmWC yY'~-.A2l Љ!vQD(G <zh03h7p+ GP xm'􃃂P 纔/[5; ndxl(o~z@h+OW2 BL*T+J+J+ Jv߾ؚ^4w]4i0);Bt2r$G4憝 0Bcn8{Ҿ@.;m~b @xi(M)Iو(fZ% :lk y],NZaHrWhDOL7WK@ehz;RM 'l c,gJ:f&$ *De9P\MQ0Զ"1LL lfd DVȆAFA"*\|T8ۍu-ن {Vֱ)PA[E/kZLt5AMMlNS{YUw-cBmh6GU}RΟrǿBޚ Б0A ZH,B"/ޙ;kOΤ YIqJhŔ֑M{n:Vf M-,@bY.wPi%:0#ux!Ѓېm 6d]R 䲰$,֢4Q)bv#P^E 4.ls9,.@ꐢI,l$KN{j]08;nmtC (ê7=?P*l~)b*ĶDFh<-*>hem 5I ҰT1QֿRfU5H0kƆL톚/jcwð4tr ׇ7LOJZK?t{{qMv^zBX͝uO +)\\2SWj!OJ:BSCw5#ᰔdDV_d:CrN 'C \ѐ-)D='*Hl*١@a36ӌ}}i qGw{"5a>츽Y?O&W_'/߀=3&Ԓ1"2VZ|*=ùL.nh\L*fdJKaRkCA>O$8$ $ $7ܣ W#NR3S47 uKyqIMC1$Ά[!WlٞߝSK4ϬD1=~B؀:&oESP\e lJn`h:8bY0| Frh5ZР$c̨3hS:XJB`Y 7q7Y_7l2Ǯ\T{[;u; ʛm+e,K# yYѱ SGIđܗ  lG>S;r[>(AÕK0{ enSo7S:5ԢF$EVZ.# "d G O1e@t %g)E}[vC{`j'}O}+FYM܇0aGHgH -aF8B`N+y00~/'оr5؀`XxJ7KQ~_b]ƴRug,&`AOzpw1ルZ%ch!㧎5Oڣ#7J k9,78q/L:_A˝yxm xGo/~|o91T? o O -0%8> KMdKѸĈ*08v֖R2|%gS /f Xރ/݉a5OA*p~H6 ϟy҇w *u G˿CkrV{~EqX }_,)dA0+JmVL(qޏѠxm~sID??,uxq=֌5*a2~JtZA:>>l~ {څkw2;0͢雓OZۻ ̖#^ۻ~og*bCBog%!w]&ns7@6]'u}\6ҕ:wtfr?uuյ}('1UE[R".9Keb5::o;I]3SL4 y"Wxd5վ}/z@4Lz#9ɷH3`vxl\<^MV?$P#FTY:w4P3~۾1U3LT'%--.W|w_FUu߭gGP%VH, Yavic$2CiD"ԲAa)Qz:' g`G T0aL-*B1"AB4ԌYAK#iޔf=}@.Ǫ-vb(%I5Opk8EH+zw _Έ4{lt؃k8#Pi%3Җ+Y)-(mkc:Ncَxpq9%K_}7:K1Z%R zH:| *PJ JAOCi Q/{ +$6!`BOQa veZ&7*/5^#wYӓqdi$hjg}w":ݕ*>^'zm2;+FpǽCETAK Fw*iG 1 },PV+ / wD@4y%*"Q* J3 8APfdnp#XtPb#Iɺ7:4Qak #;0Q `肕EDG</2D K,ML$$(k`! 
1X}FECADbQnfq"|uD7uJbjV_~jW#`7?]TEQL(d׭|]0tZW>IfGT=r<,.թ}hOkަb~^8]F`-Gc8IӶT j=>>81Ux!U%1~sI]Őb-LDƓS+|2-:/sx:wW ;K%R*:*&Is>\zE û4{Wp~)  '0ۣ^>|ѯ?{uo0QG?8z;Xu-0NԃEA83} -Q4|mS.^>’XCZ[ Di㻱9\b-Strj6A~im-׾W";5$)=Pd\&>N"j'1$K GQ(̃=,0PJBx0ԀK+MbpQP4 bCL?sN;oRQ -h"'0`r7N/:E4mLܓh{Μ9!;OvΕJ' jUB!G݁'wβSVyښJ3v>g1#G0EI%3ȖXR?p2s;@QZ;߇qSЃ;Y kOɑ d 8\XSZ`/Z ty I+%HUFX{Zc:?|O"ú2oP>TLF+ tY) =&3z |L¶̸v;[zdgTo"E)NfgcLJ͢0yQK&RԺ_ϳzͼ$J֨v^UBN.UngϦWb8{+U%Ɔ5(~n.ޯƊΛx8^bZ߆qՀ (~hB,"-<{ c[h,&JlqXr?;ݻ1c-$[W Dz[*IKĦU W@\o\%Jj-pEXJp$3+]]`/:P..P.[8`ֆt@|SwMk?""H2ECU\$ ,O Bo')10 aq6iI`n \%iIJ3\=C1Lm \%qJjpR \ .&JѮWIZ6D #\I&cPr۳wĥ|[*I+6!I)igWJ,ΕѤBj7]Y%ܬI'N)TGͯTxtDXx[UɔrD"GΰٹMv;>WD>.i\(y,^ⵛƏ/w}щeBErdQE5\ TF" " YDCKLau_im5xo.%\5B;]n*m\% 9X^kbŖ9~ \~p=h2 kwMx2YcXIw_j%f9BΥs)\ 9BΥV\ 9[Υs)s)\ 9BΥs)\ 9BΥs)\ 9BΥs)\ 9BΥs)\ 9BΥs)\ 9G{Sps.K!Rȹr.K!RȹÞ"nl~RI`=nI\5ؒ~RIJsJpS/9KNS/9KNSd?XiF=5Q\q9` rm0 Ě)JC,It\9!Y熪K",&pBr$ \NJ/ ߰яnM*ðiۃ}9[xW֦% Q}w ywy&tv@$mov~&7E"|Xp##BH+% IJ$e4*)Zn Iox2|y{jzz,t -R[5aL~aA s.H@R9$:]^'IpݸqN7 Z3gOl9RmFP 7v֧rkB`Va!DYH($qٴֹ3'uz1rf}x SX oa9|̐!zАF NJ\ WMffe'h\ʈ* aq-" `BK 1</_KXCx dP0wtf1|>1ړϊ6{Z}ʬJ/_Q7aoӯ+|7?tBʒ; 4- ЩI% »r4~|~^4rݻ?[<|upZxk&)wp?WU5.`N.Ϋ_o5L]󋳏Ѥh4? Y?:u!f|0J{B?.brߝm~wׄa L|lu+_T-=$ѭĠfκiˍb〠]vlLy,t8C[I]sL{${S`z4+[܁"mCy~:@WV9;$}{fŞ>7ۜN* T7yU.Ͳ*h@Ֆ Eß@QjRQ#E)Տ#P5ߕu }P*Q >^n\vyȻd9+}nٱZ248ۿpZƊ'75j]\ڻqsn[\Y^ӥGw;Evt$kt6嚀U^nVSjqm֧hUs]v}}Mk-wD_stZ}V1duꞎ'WC顛]\H[IV֡\'R,5 |oZg'7d(eBJ & /!0B{8aUÂsC{ N5 pƂac{EZ&8yd% 97GGCJ9ō8`1waʽ3ø)aNPAlGR.%լ~k V} ޕ/MSsT>YmՀ>^ec~]%y`+ֻid -lŊh&zz?X.uO[%4x,Οs9w8[yVh@ȇƛ5~SWGno?>1IL,c05#^b1j 3ꄷs玏L.sH}''J< 'fziv qdpeHr RQZp%aT"E)>&ݎOΥ"ԕ{:nL8GaWJYǃH 8b:DυрYANr1:-hjKu+¥&LIxpZF | \˴{Feg@a0+suUkIFSO~1a//#$Zu}HxI@=p#1 y1p0FX:(B "ʓbVV)4FƐ:>z!!J{P.V;mT Rbq,;WQG$[CxN *$iŊ'^YN,rn6c\*q:p甴a6X/Q+6vW0 U~j!;T+/4ĠlHu;O@O&րsOk~^|)@)U%RN --XS$@:^W~T C m!Z 0$&DdpGcuq 34EdH`<Ģ&B`at+ ޛ~ AzM鎦 cx/UxB& @q+$r&"/Oht"Do&\|m!j?D3bGdh2XbOE @"JFc&0E4V -)֫ÿ{sz푠 (hR`TT1Q ֎9rq>`hXt1t_jMAa-h+wpS!p`M@!=U!%aWљM. 
*D+h- l6kh[FPPbaZ 4EE-wtDө 4(2x:62n&"!2:d|`r_w-e)]N>?y\^_No@ S\9"D$9f%p}.S F>cU(?u]fDyqX0R(|&B 逪Hegt}R4+~y] B˞ś\qQAmb8VF}V}Jo=B4J̤ W 0kw(S4X3X QA3H),f`)6"tcqJ(kE@EQq9' x\i[TaĉK9_ٓ9`41< L$}9˜g<[(|:?cd?,XkGvG!ϮGAGXKۦGe'3S>LTYU<'O' *{B 2<۝nr˸@S饧pmVfƩѩ\j+өr\^hsE0اhNMg; ޒ57򲺙EL1]qӧ< Y~VX̓x'uO̵S66f 3“$IBVL6j5֫8YK:j%RT}r ə;ffffб_;3sEc3sU m<&εE0;뗷yHR9-7f^Uzi[F'\ɒVCj ({`OD Tԧ/c熓hR"n^ M3X羚~0iՀ0֫gP,j5 Ɠ8lC[d¸v[5F:2cןxz/L>fQQƒ% _Tzn qBiRS`:T(KݩmY ֨Á~Pdg@gawj%p-gc*S*}aSF4 Fp0<=C64kh̍櫁ƳT6zr h jD>J$FP3fSgW] gUZ8;@nӲ@NC`ȹAJ0j>6/߰Phh'VPĶ)5<:Qh&{x|K@JS[E rdmnk(usk+IR|5j ?U;w Hy} 9KcU37MU\bNY&=(: 4n,E2w ^× x WFT!'e Ϝ\>v5 +"@>lH;E_ꇖhzhױsq_:T7Y{pxEvעC#-Z?1s3 |jUֲZ`~G5 UTTtZ)Q@.-"(. )Qv lV'5m-pngHQm,%& {-> !Ә:l.QTGע.;@#]ߥ8Hmwۙ3/,XT(P9r0+ z!:? g6Z$+:3 8a539 -*ztmhލFc?h>1ap'ݞ]㿞sfs޿͕~圫g߇ϣ c^MG䞒b> ffDJPFa~dU*d&=fEz _ڤzg5D {X3AȽ4Nt<d@t?CtЛwܶo ov;fǹP?ha6}|ZB8@=K ȋIՃ+q~& I7|lzޥ%b:'о?^p5hUV?fܢޕ>4˃eΕ3ڮX5KNvFS;'-otxVgd#r˦/-2%\..@sSӚG#^d6J0xFy+ c+t0ΐGxaDP>&P`CA/`YN{֠ͧ^Fz2$$2[9Z8(>A7q>~|{M a{݁v!)s-QуYQH)Hj}ɪ/"#HЪL^ yJ9>|ػ'3 B3iqpʣU&Ah8""ZgZRA*=5aq~be0YV: mզk9_45vn]LLk{u>10LVy4l2!EW+YiaD.&+Jxz#lT.!$b|aY4zb"A\ 65a#q|EQL2^Φ ſf'HL8+&iŐSJR*@MU|JH :+Fp@I0 qJ #HFM ! Tb6JŤ+_%Øt<"FxBP 6V@lŘmCTt{ $r8h)slD8G hK녓XD%՟ N@f'c_"5+Ɠ$H RskP WxũP Qi`kOI#d*gkit'Ĭr4̪?!| oc>=/(glZӝp`6%>zZ闙KW 9JK9awv\U TtB1gW$'6Hf,imeP Xldƶuʘlx,鍸Tac\wZrNb>u\Qf#XO.(ʹigs'+y\x )?oBff%{F-xl[O/_ _& 5zV3Og'`w>d &dPZdY,b0KHF) .뙜EÇԣb8xYƣ QNbsF~A%F529Roe [(YCycؿlnU0 3,4!#ùxNBha=C7!LKh*XF"{]++}Ii^A&Iv 1/5x 7`U%we~5+嵆"AytHFx2Mw1mijfv2`?(xu_V/_M$|&u`͵g<%7Oqo'I! C*9,JC N9a2OPs%m:_Y-(}ɇ!6<O'?@On`S5bkre:˜[rF&_ߙ Nkܞ6J$z|J66)aE+UNRx^{8;},j?8(YX9t"'zQ{u9ۍ0s&E0 nd 8Vq y3 /dgQwf5INwYiH,v ,].Jdj4} 5x7z|-r9zt.Ӗw;ntywH!㙌D\䛌9j;MƜg4LƼ%&#@8 yEY$ҳyUFF4M4kx)]39ǟ:tÁ9o7`^ @.gDZugDvQSǎ<~Y,.مB]y͆L9A=j%ĭaQW6@hgIdԌРV U""UFAݫ^] #DZ9-R"Y'9w[q=4z7n+>ԺcM|0zLLU >5(8o%ԺiX7'DҵFkP@z0O\M5$<͎I}QG+Nz 9I#Bz,Hd8fc#J!G"vBj!k\9ުZqGŒY+=^j JifYdH2 ) X$4V, Yk҃5|iH (vGl-WCFþBK/#wN8C#u֜P%A#C ~DBhw7KX- A5( *ھ5lt;GoΜ&k༺Q M$ ',XEJD AQ(z930р~*Te.hi;2D%Y9gdQ8l`R.a48HjqyZѡ=QމCT #5uRFp4ڃG5Gɓ>`ZL9ﵐm {P O7߳JFG`"E؃$2F[tP6])ق BU]+6eN#}@xFQS 2y"t( _0kf]m%jG5/ϏN"ܸWxQZ|FO2q wH=S0-yƀAGMOL T|$Ne}qW B/_-9 Ƨ [4͗'9> H)ݏe?y„)] MZk[[L+j>k,ʭǻ߷2Eadb#X0_;|9H߬WԅR S=ၤzc\` Z $̗ )`| Rj牓1}=&8& vjs6 Vy]+btoyCLe *:p5*Gbw59#xFY]C]NɎ0ma}YʤfHL ?YGɌG }$tC !8L,@4P^{!mbR0Ez!CbTHқϚfO;_W1((6reXo\V2hx f-5f=tMY1c_`ptE1p quH ϸYSzϊ͛B[~qgpOor2O?YܦwRW[Y,nʲKo6JptN$5o3~CXLxvZo5,W.wI<O/Uq#K%a K4:K\%mU"%-/ƍ X22 kX8KZ^*b %By!SF'b [x+嵿Kօw8#K56X9saDd/#a&S@RMS@2uO?Fr O?F uO?F O?F6..*Iy5la:\)1TJQ~ͯIXÒF{mQ^m5*/{35eLMmR*f늸y#BG;thy>p2dtK7O'tBBil-7bYS@¨(wbשL2Yy|n6əM}[>xqf,qFPN$,Fb.T?jZˠ p܏D=ڔ^?Þx}3?vygD{x8#iy}C6rpy]qD3zGKp/0Hr^3/'gOW'@Y|zu⋙8W~"_\x"=zy'BƸo>'ОK!>D|z!^ >D|z %O1* o9I3K,5~[ՠe>l7!I*T9 Y&zԪ(( %d`\9G˽Ȅ>ލXSIvzmx}+T|>K˂ǥ`U!)BxU%"M8:#<Gȑ"LdIG"\*#:")m 2׍_C(%\Sr ը9 M_y]7ӄ0;}۔c߾(MܥGK6j`t3OZ,`B@*A ö(dJ$*0p P <>m%s;|bdK]>IYͨtɍup2;Ka%)򹢅~rG[ `+ r/ q:pY r#EmņArYd!"J6Ĝs fXđG%}RG&GDBRo @G`9F ĜA H,.s1^Y:md4yRYB9iLԴ{A8U$Nˀ#-S6FJn8˾( NjdćWћ(A 箛mDןΎO HJ`6F3H R.r_foL*T+U&ĕW&x')\mxǵ ƆGVHbO Ũ>gӻbg)EΊrֳVkI޵€ЙS_Iո{!y-((:Z\ +-?x"fj <~j6 JK3Ud0L _6US m`1zLpģiL kk}nH.`J$_,߇aC +Y\!, >z~ӭ1/ )K %)S^N]4"*)oy#bD}8D1h)Q™l &Pxb#<_8s~; h;P>ig z`ҥ{qL샱LTyag*n˞b/-/ \9_0$E͙GɳO'g4)bЩ3z º™9 a]紞mC-(ɑ_XtwZqίc u4qhdXև7cmLG2H|~5m߶LA^#.w1Vsx6{H]uY1K%yQNZe .A6-d&-*ZP\$&A#G02[x2g5tE% @2l/ϔ&QQ1) sZoSȞ:ׂ@J6rD;U6vSÅVJ{=Xe{)#V8͸E ƌ!C"KU;2hՍ oQ%f7EaH %g yeu.\F9LY 3ȸxp- #%ECy~9,C#=Yڛ |wOkhL|c]G:XEhL QBf`@Q#+G6C\C"P:`q4% ~|>5jz4}e j0`\c5/ <Y^iIFˏd sȊ!A 6#$Q!p0B}0sAR"4(\2Q)"=a[jAH/eٶ^! 
0$Gcȶ4`d~|w+ " C(9#m$a}^4x`S,Xܪ &Ũ/@Q*hK+ yS˫&rF0:$Y;`Wƛ6L*r` kAL{d#Q9{ lY?e1S~OļG~6DaU,q &Uf'q,w 0(`ի~@MS]54Rȱ}&E9IH]ZLְ# RGiq%&>C)Zp0zP_@13otc ʔ|?]48=WCLuW&@k器5, :L@xq㰈AB8 Ldk#do~G!XȘQb@X' EƔJ.K ڔc5A0z象 0*#A4 =N1Xťu=_=`[|Q~!@Rim't?~jyDBy5QjlROelZ'ͨ0vWO:n!& ^xͿ+% E<-v٫<`1Xїsw2JuȈV X?+ ^XkO`a<6q2؇GL).{3w-7w,:8o`TAj;~[o -k A!HW6wl(al8)$.2)cXb ؖ<#IIVLGV};0ڍcxY.5\)*Tu]v=[h]vb74X}HY!x5bRhs0S1 `51q:>?=w {lU^"/޺򎵗ɷ#pcb[Nϝ&K1pc7L~,ˋ - Ɨ k]L3_Y}V9_-|Z04߯Pp9q@ɤ()D5;e a:sp!W\1ƜkMzCꊮBƀ#PlF?Tt 대#)i<4z+Yx]n<ݺf.łgۮ٥] @= |}2E3N:FQ%RA (=_MxN 9~Gc4ոn hwaj;W38si=1b V`GȨE%Lz%cn[{r7cz'jjo)O8 p`5UgF)(y2ɇ9^3*ݔ8-2K±|W^ЊC.,D-i#Dokwj+&r!^Oj6L ge N3o0W $#80Lip83MA35/>M0y34#"pphizϸ1&ΈAIX 1J WZUut ,_lLpAZH7vhM5]K9;Ҋ3ޏa\fyZܚمAP!:Lj )(x ew5Y,^m;곇E:?e uΫ,jtaW?i["=_3>_̶=xܣ׺rZ,S[HkcºP >Tp nRANYfT aw `aAd1|͗yx3],gfy@oL6ڹ0>Zmٗ,>e~<Jg8])z˪쉗DR[/{֔}ObXΪ$:*z]3Et5sΆ ~IL;96UTstFQ=`3Vd=dvKхvse0쭵"EgG;cWRIT]ΎZ3p,:@1Em|٩{D/0uOky5jS[VIf [`ܻX2lwF1*Q, Pc8z Pt saxNa9XA0,=u #x=N=9ymN'opt"uo.+87ιLqkH 5^BnQXvJ~L k{J~*%Ml#(,R M%LPHPL԰ZfXĦ~oQ- I|\W-0&Tވw-K|~5lGlb%J{:N0 t5v`qœ1N7&&?h5q@n`..'#O Mb>=e.Nu8$U\s֌Syt-= 8?e|$O.N:y U!|oݏC8ÜG_ܲ8_u5KM}m.Lbf{܈\|4(ؼ"bQiryኇ0jyj}bnGښ:"f9S#k$ ? I^s-$Ʒ#5Y~_ڋ5pᒡN\xݿWs_},ŹZW}DņVbӰZ{)S߮ˊGn_W났ٹULnAOvb@t{>nD'?<>G$Kĩ݉s!ʵ䳛T( I[9VU,{`KՇP ~ 6nuj];/t#x"?y r 8i` afTiƉSژ ;*DU0?{F^l/aIiLf_aVGJrAJ]KRI,V,!Ys#yA[dkRNxZ# J23 C-[>*ܦJ&-@HK#]TԨT=tl^^JOIs1rQvQB3SrQrJNsɦ9F좛_fU"u$'0 aVWaUoevA:g(aaT% MaN#sS p?HC-~^TLXPT=ȩx_6^)C>9,NrDcw ,E~ǡ y(  6*ۨn lՂ&C I%\yj#4i"Wq )# JSf4K_f.)[[iћ%Clh^?anloΏih^,jGBF6z~-{~eU0[h-m(DřX (c!0  p(b! ujGIB|BD]G-]u|:[RË-] )(;p3`p֦8x0`cJ`̘V1 H0-&.BaĒCp]DzNtZY%gPEK#X"ስ"{DXRD2WxO 9gb,A+ C\W$瑧xJ!9YhQQЈX.,)( K2TK$ȳ֠%dH:5pĐǂsir*_Jp"ʞPBqγŌsѢы)0B>DO 1Jqdr譐hR"PJV.Z T[e,qŚ *P: CO"}:d$Th* }=b[f{Mģł'+o42JŒM93LT:ȑtBBLz24{P~y5ym9,OFPx4Dy,5D8 Yͬ<0B_!w݌{ #rڈϬ[piLRsCSDW1bPW8O f(- "*P˚ęl“ygʜ(%b1̀U x ZtaS` at5 U\fQfn:<ޏ>W|2\Ak{0W̾iq~zafH0ϾS}{38$Wgyi%s9,s}X|z9]Fi~za\B,(3Jy>I~_[1e'wY,p9oX \Y^8[34#+/v0͛Vmb&Yz^YA c }P<:3޼tŖ _k^+qu]`@0]spK֝5K|=x??܍˿m_V:1ds>2HmH7-0YZ1ɯZ㟚.Sb^z %_0~Log1.lf ,a,Ўƹ҄ 08RrgZ&{l _y_'ӏ,anLF{\>c>,d}2Do,:m2rgh $WJU`Tږ)nK5~x8vsO`4 T:ُ|cV7k"ӞW̾êg[%:RW iLJI= m V< /QGe$߻1q7<9[|u6_JzzgGwbFDߖ[){zuzi=H[*=:]!LX}ZmnӤb `XwNȣ+i@7yNDA޳Jbhrɻ~-M,wYbXgK; c%{awXtN#[>V)~qj 5Huyd"΃'pO:IFZQ>'xN%=fv?yYaŌ@RѻZ,zw0nf2AkeNNx{<YW[ $'ErdUzIv|6Z$m&)]Ju$A'fR _hѲEZy,JAKG'0p CYZ3+&]ͬ6WI;eB{9R(}Z /+8e "// J00m kH2]-+6&Q-•Aۼ/++ {%if@t :*+07׶sNN84JM.^ @]Qê2?׆ Qwyd\Zl}oX/*sIeE-Myp1}vx@wy7i"\BE9ƇZ$\&r~THZ{O^&R`7rBR!r̓KՁ}Ы\qdK] Ԁ_M]dĥRΝ_MB{+O *ߟ l^ =L7 %wnFTIOu/.nKTrr:١dp$&16u .m&qc{䤋\%߳vA3˖zK&j6ŗM3/kP90s(  ïX9:SuT|rXP n<l:}&a>S}28:0׃ d>0q:CK5B2cs8:ь/ DG_ A9A|Ng׳ Oa ôaHm6R|[x(ɐ*i)eH)@(^`!X 4~llh!ѼoPH-!HKTjXzSY-^cDa2Aҙ`/ |Q& E`[VЈLvH6Bڢ)J~/hȂ׮pZi)( L@-k ҪoP60 B8`̭uı" 8Fa",|c`d @PPcΤ@`R VX&YnIf-b1(T o +Lt f4||)e]Qpp_W.hz. 
OWKTZ83XҊę0vy~ĝGw195eD_<[ v~1lb,ŸxcX8_L;[`v~1STT&Cf ЊATGYaXpoX 6`Mď g/&lQHv$lQv$lQ/V经9pqc5RcIwɚT>ajY^ٻ6d e7owXJ @"x>CJ5dH5N`Y<utUWuuRaMFM6|Hz1/f9FK/ʙx%oN5UժԭN,ٝjF؍mݩrz; Iݠؙjp>!ݍ䭅ݩMݩf ;RN5}4AV"=b0_gh>Wnǁt{mYo:1r>;~Ͽ1vl4^%cwާC vRXR'tưLǗ ͏f&ը1(`5Ovp)=6zV.u6$!/\DdJ6[*uD'umm)ro-=_Ӻ5!!/\DdٴnTcn1n{| ֭ y"$STܦu("T NXF9ߐuKiݚ.I2oZ71XT NXcN_[znMքpM)Ǝ>ǷaEGaR1#:clcC szRxB.!9Sq8gAL3\F>^L;fn2 b`H,2E/4 p JClUrBH `u=.wk] {ČDC 8*YDV*Z a!ivXA,hW@\BeH)d~R8F4 #1 6LVp/+}\C{'3W}A\L&}0JU*"i|C`4SHt>M'^v< 3z<ʧnfvT62ۉ$"7+ Է[Ƌd1?}ٻ|>9=9DOʛ*nAfs;ܛy;`6a1–ʑC^9Go(.wsmn z7lC%֔x70xc,8'F`ld>8lLoY WH*U<`X鄱2&$g.TځHXcj^:AuM3`&*t́Qo^D!G VzJ0*ߞf2(NO?d#>8F%Dr ֜y1TQFj Ȅ |a!0%+Nb>Wj#lfFy(e+^~*ffGTf5`X`cYB*`{&3,+IG,[y|+xrn}0?]䑤T\doh)*L=] 6lziװҬBwrOo_ |6{:W#/qŽ˗\e;%^3_Os7x4FVpkUbSZQ|7Z%BYy$q#2%cliAsc-,чOq#ctd=Gc}k77{$^v `.V2'gs{椠Ο|GMGAfFkNw=)^gEA$pwkKRJۻ%fM)695"uVQ',B`!M& e C]a@mԈJ TމPi/_@X%-JӄuX;v [FS(/J5BQ(i<`s\^<05v @|p|6uaDeOV?Ceݕ&QgT9lؠ-(I5>%ϳ446ÚTw +,t MW0l 㾺?a=h*˵Rb3un(.QjvRo<&"yPT:2htsg~0[*m=CbK@~ b#cQ3ŻOw6l;NTuj>S'xO:`}*m%3d:k%9 8),,g\y9 )gرc RzY%Apb-ƈ)H.,bc`^;L*Z5 { k9)B.a+7TC9Q-J$C$C =420.oL5I:ʽס!ʹc !%1151}Dkи)Ahk VA# F{!,c 3<dlΒ0El+a/(/)j"-"Eġd-{\ضs۾p6nPPt0[p1-4HN I N>poNi [0‹!Mϯ`%c.l&Ҳc2 bS_h| ]7tߎ[pX^IklA@?{fz>4zx; _s?]f%+N06-oh#3[~ ݙ~'Ԅxgz,|/qf9Փ'lٳ9:;ƴ]n/>hݍ;508_Kzi( 36}x֞׸OZͽrnz6oKɬ)>HPRƼ Z R5:ƢVZf$iD8 s ) :l0_nh~_R&G+͊Z|NYc1z(sX?dTKd,xwbxE\kԩY_bP?8pE@"_)WmMZ2Qhl>NJ&t֤+(} #̯yTwu<|OV?o+HR*l2[qH;Y\^y mIak4O&ٻ7]L_tFm'/*=aK.yZ.{k6]w7Y YP)E!{K;iWҩ{?x@x?/gzxks8_ b4}UOL/wRy-f4 ~3]njMy*p-\h>ZLIֶJgqpi6 TEn3$7cIFonƸٽ~u,߃ATQǧ |ޢX0ƏAyn*X?rО/u9wyŷX?n/SٍBXmEwLOQV|yAI6׼XLa?Fވe(QL/bzyy8Zx].)T`R+t)U G #ٿlޔl8EAt.ol>efoO3Ǝ-+~%DxƳ]X(ǘ](_mѽ-|n`5OuqGJ#ʛ˫7&&޵6r+"eO[ba3s`svy `٤2$9IŖdn[-s4LlźqfiQ9\h%'Y?7lAKsu3ۗYG񾨦[%_ }Nkt_&qo"zh΄-Xrh+E(M LUj \Pmܪ%S;O;1ww¸48%hBvԃP #b@Xj @Όm1B Jb!ўF# ܩkŚRM-W;X\w (NHK' w_ ]E߇e($]ӌa$$SE9?-0i0'Ѥ/[k2T:V֛ʁ f AȪNќ@aȬQ$,gPf!>Ue瑫8]Rzty d㒝fdه4mSzLO]IWNIsڍ(d[4J8嚊g=jفjWU[DJ=oL-ƀ:}tm %T;? 4@9"l`:Jw )nm!Aq:u LgBC++8J^;c*Y^b::.b)@A`@C Pcd;69R/hQ1v~4g30PoI ݳc<9T"MP4s2Fd6lfZ-@lGRMbNCߓ-g0ǽel6I^76_ E էF6SmDӋ#o$nۛ4mt 4'YE5Eܮ6LE:A:Ug + 伵"d]伏E,sէf,˶Ϋ>4*;QVASm_t֍6ɟѐNA(]lN Ռ>޾TjH|yBQ)^Q:#P)IV;_:lH}c"uL&RW3qP$(MӐ mkk!̽[7 ִF`M;?Ɂ+*}ʀufo=Mլvw븻]UXN9M BdD"%)+3& @9W kiG^ΏfX஢-5Zq!aeuL1y<й4iˉIpJtO6P%w\x,ϥ,aGa L \Ri}\VNo}QrﮒB09/ LYӚ ȥ59X=' }8n\+.kz |i,6 [ûeħ[nD,~⿚+zyYa+E4Z[Ob/5V*,wEn(ɤ dA\Y(u]:P)b}Ҝ; aǑt@(v{H@y R=s2tMc;3r Hs+t4ñQզ'4$Xe !h.YYhwr M"::ІѻJJO$@q*XbuANHoUlbMXI%2t%k7nގf_N9|Eq;drэOY T%7< q/HBCg>6#5F*>C%g~b68;n噼}6)ve~woQ`?r52|q*k,ZgIh?N|o8{x]Glpyq<3 ^je|ESQozO6wpWM|xFn/>,_\ V$2rzKWmTKvd?9ҬRuFQ=τ!֑-/>fs R 4$2za~ LI)-ϤU$Vޛ3J+rIFuPt!@9"w>(! u\[ pۢW+4ܯ*wp&}F1W)J~؈fJt}H 3BI6=ioR[4xZlþzm&lzk$$gp6@MՒnXB0Ta **h 8!&'Jwbā醠VJ֊yp'4r<[ Ss %  .PMYC0` _MaWωǑI^3X݀;;b+k\J*{<%S2Pa3_ X $^xN-衰K'~ZX; /F\ c(*8p<&jqZ2/ :HP C6ɝWB@lPO8laZpA!:ʹX1 MĠ08nc:fP MabX`noHaJR*Sc7Ԅ+%Tc9ZH Ja`)_Ux_TK"ώD(! bl@v 4`\ 2 $$ ~%0sJ:H C$~URG]%Ɨ2ipL%01bAexտHLHX0Ahq5`"ۏ(j\k?Z?5:{{W YQ_k~ىd6De]l>%P9_p&K>|gUY= lÖ,Iҝ'K:N:г.'2'cPg tJOBO$:%,f[+<2$Wgə9L9d$xT88oko g~yQc(oכ'=SCs(@)zrsGacɛ㵡UiЖA۸2ıDǔ$7f>V\JbS7?V2qX62믃U}!@xӓ<0۾%NehE8i}'Bu g. 4۞W2 ȟXbc@i Q67d$Ѭr5X]^%87ws[`~(Q.ϣ\Wr#a9  c--ㇽqagΓhA9\r=S,/w\s?~m5AkDےWU{?~0rmY݌K"a-%E7/ŅwX[u1J`tDN4^͠ 9>v-Zs4&"c2w8?᣷BWqLRp1@Ӆ~VK+$F|㥰KgHbQy9%8N.6zZ;8 ̛cNFR{%*l$dQҷO#*y'tHOQL9)F7WrfWB+'b8`i BPBZR9w²cxmos. !WuPI,*"h(zUWIF9E-(MRq @42A*sޣZN(juH1{#MdK  [NiYQJ#cxDnvߕ$l4H ,1i#Q(DPړ@ED"l L|8`n OP-bab6$LZ(a LQ򱣀W?4J!w̋4fr"muAu8Le[v )C֘ɉId[cKQ'/`kuPg?{ "QlSK@‰TppRo45"EN![(1>g#=DIQT=P.,{@>z7 1;@ 2{@He,QWl4 Y)$;! 
>I^[q'`n@8 4  FZ U;OҬcL!vkUX3ʻ*]q=\ֺS 2V%+qOs&g !cNOVI8 G#Iަ{jIb6Br@ UօӠLj-S0 -8Xq6ڟ ۘUO ی\ {saBi!&o-\ү7snL7wB]&![|rW{%JaN$+zN JYaIJ9-X( +/2\s|5dZ*3=o̭1 KWF\_L'oe}89SJFJN{|f Eۖ;.'"MG^ ߘ .85fxtJ N$*L2Ge8pἪ ՒyoCa0\#aR+ ) [Ka* (qBr|Js?~sanre~n.Ŋ&͊)bʇ8'K6,bdA4L[[&9lhUv48Ӥ xh+8ˁZå=lzxusg"[۟o0-҉紇z MKX̾+Gnm}y";4!Cank9#XỷYc>p*:5`ܫ2?1 2 T]^[-Tã5L`0FW4`.` _)_k$[qʐ 2ve=f;x6/ vB8dZݳF'}vؽOŇUEnMƕ^>wz5L8d1I`PPpeH&N=6)lG fX^d;*wIƇ'cdFkdGVf6?=:7;P闣 P/,G$e>&[!8\xL$L"8gk gB8 Jxı kqJ+ ;mU+qK;?קY܇xSJSSF# d5$;>_ÄP LG  v0[ d+Aqa8Iá7;P^ޤj/j];Dcd xNţ @@H&hAW&@_"åױvvZ<0lejQ¼~zsJ:(m}<|҅[ UNo߈/7bGb2zQDu߱Q6nQ {(qT>8B<فYcUx Z"⭓egE4ϮWᤖ܀ JFmm=0w@Ws`3cO>!l1KByOG9zYxƈ[0"V`  gΓՂafD*Ȅ2SBD6^@pǯǍHsj|:~*P^]9cȝmfEB,6lyk7]of}sLg{sw邇gcٻbxE76+/w7~U*׫h=E΁zXٹs.M`/ܥ ~u6S֗Nz͗NŅ&L" *8#MdK  [NiYQJ#c@9,-T[5r U~KDŽn2U>PbGE4K:n$`r1Hotnz33^ [6֭ y"%S=Κu4 [.).mS%\fݲ Mn}H+=21SM)ohob% /Y; ~|csʯ奙~[ջ/WZ +a؇aC+e*SY?^j~jaB+U!f>|`MjuTZcFHʒJq,:KdRtCO+d8VTϧ6TyJY̨T!O`!-dz(+8Q?Zt\wbwt:w'trdb@S0xXR=+K͑;zeInQ2sP|+?mYlvGE,r0G&ĞADHLu>TٹN*Jhmr2eE9n˟fb*4z}_n_C9YvsrcbF|w]fUYr$7_on~P9=g 4H2m}ysAb$(:[`zgn~i/Ƒ"]̄,Y[;r |:|]VI _%Y=3=Rzz{׀5TX,."c$m;_^vt~N}*Pw|~\|.PugR\4 7Y˅v9/PAZm"xGu)bc#b˛#!Th\?r >i҇[mr쏻7u+uuo]Hi;66Ʃywokn{eRې4Z`݇75 vͳy헅hcξDڛNO<#^޲&-\K(DS" tDw)MsB:,yۼ݅>"W:Lm1Ģj]`t]@@)]a!&pVBZ5+,I&eUFK/x/K׮3*=/f58h}"mH|:-uog]ef? -iٳgVzμWFZ@|Tl(Il ֑Ԫn{428j́Xnnέrb)ߣ?~m"_eW]כ6x)ׯ3[^%"|Z-JI~c_GcwsmkeR@eL 1@ E&3!yhrb XU; 5} a=@;vB+>ֿ[ZNJbSNU2Dp))IBEf>2,(XmwG+۹YOx6l^\7&zbcd3.P{90\@^iODT&h[L2s8JN_9>lBMe x?{gzkO#d?Fd9]QxkJ( 7CKl&Vo_f>Zs3fë ,\ 觿},v7E/ONz}ŚM ۧN_^ NIxqt/.x}¶Njy%Gbx,ZGdz3N{<־fǓKk[W0ŕCA!=K#.{^K"ne%5sB Yc5&SH$ˬ10p$YcA#z.k.!c!;\"oԂHI~)SІ̎~s$ށ9#K0hx cf4*'vYEV؋uFP*D,&.j9f ] s^}Vn ?]_>8=Yd7̪wrM>}֪Tا<_  Hu%'A&Mm͂Pfr)Kbq?IZ c{ݥ=ݧ PF+J &HO^zOC>?YbȟǔO`FLֿxv[.G66ْ".줤1*gbcFقbj[H8Ƶr%  ,8XdgP,VFXqZ}F^p#9{2{< qOw մgr>6 7# n,Ώ5Qs?|ړ1"e)>u~8sj`li@mi/~֔>C!~MJxPWb3ލUsu '7 ]743J/>lyA$5]5NumW&պB;pƆR"-P ;8-VyvVLqOjG ,#rQ"ɬncV62Z3vVh{7Ӷs6>& 34l0/&u`!&ӺEsu4}Y9ֿg8y?z [+hżh%nA멼y/b{MIֹOL(fI'n8w@`'wܹfA7*uye`XkcQM Ӝ,ǓT;/;BGvP9:{yU=F"KL~nJe<}M8_4"G /uf܎Rux}eJWoQi:s-:?U`,'Gڈ )p :y7#_y/GZӋ-Zd<ڔ+b^]e%oUn䮗{X4풐iHΙuЭlXy#|`Ъ|z OE),*c^}6|^7dɻk}BdQ#s,$RFfGdZ>ZC{4:Nk(Q3!ܱٙmz$Ԟ:u?Y08aL=xwD'hΑuБxo* v͞& ^fg\&e43 'ҝ?VIaͼi󸊰WB|F޼2oO131$6sHRe 9NRyWJHY7ռr<MwnG?iĮRw~icr>}.7uAK40IU0)Y>% e ڨepMW!$6.NzE懬wmPƔ &A˦! *I srRJx "j{;@m[u''z)*vO,YѮx\^29!!K:)ME_[Mh$k_6`S\@ohAj SĀ/YzC$l(F6almRh,ΎC { B1 $y .^yuĶ8ıLJL`nVI|9]XuC/L4DNۈ%^/"c Χl Ieae~kRu{\?׌Wj[o{x%w8+Ϳ#!^6t^$~3. 
0Vjݑ#ջfemPSKr}řc <'yEef#Q1%h1Fs ]O@ś7v| z77Ru܍ޒ 6"GQ#!D6@Q@a;!+e}-KiucD+?3>]CKRC{t m3ѡ^&,ɢ`kJ 9ḬȆ́nU>>J2AW 2:kL`H_ثdT]pJ<*nPд`'U;?h QI9M诪ϫCa/ȫ__K8Pzp{lJ /674^6fްZǗ߿O;܏r/c'7<4vG$ICK;&6#NٮoHa#^m F;$6񻑪-k8؁RYg2sPgmڲk{,͕SZIޢ+2z )3P!jd7Aȅ!^ൢ\C `Msn]sr#V@{~˻r8'>?2gǗ_WuQ/ڜ^6X($EAC'9q!$(xWN/4Pv^+¢ YH\Jۊ-0OcxOHnoRK<ڈ9AP AVۂٸ,#`X>d:qo]'<`?gV;!ڣNYhŲ̴vc@t1,?ʤD>eSQA  vgѽ=O=($L[eYRIQ5AV$[̤H]B ,g2  ?>GߎwoOB@߃و^ٻXt!o_9`_ f':(./?/|O:k] n5Ӄ^,7rOXk!J`~:a=O0yj:3_j`O!7_UN?78Owj|篓ʮ87XT+/hwWg$ ;f Zhi?ogGq0^BI+GETQɅxeokexeGq{ghءB[W+OG yXL-~t i)5RW%4L`>aD`tV|޹Zq2uqmuؤgYwLa_WO>D|%vxB.MэVۍWqL)oo?Z2zsUQ7z_agԺa-hnF͖7&*=_ۆphu3, JևN{;8,[W Mh{~x Kb߱4WG+^1mD~>:?Y"J{c;oez%_41`_+F@KY]Rm岉v$ MMx+QƶzXN;xE_F0w߿ݎa!߹Fٔ>n Ӊ}Gv,,n4ϢzcXwnm)wN|\caUz빛g_:Z}{G-FMrz9ի/}I!0z8Pxw)楽8.w?+⑋;xl$vrW˽o  \o^|.r[ϟ:窱;mu<+ o(gvw=FԐl6`D"E;,OvPT"0ق O4QAeB# 4AvKJJ&Q=*Bo``;kx{uy}"y͠7ˋg`ِ|!i) >W+',,eNQ'bct<a`ˣp#$ LB:5!I O ](cb~;;i_ecm˶iD2y&?vƨFDFM'l@L?|V Ĺ8d4톼u]m "FZhW)ksH{‚kl(0GxM45cTk*j,IW0Dޥ&a>Q"f%RBۊR)spv.u{n#NҐR)J8>\Է\H"Z,+W%,"PƵ eXŖKHJ0C%D[&;`4ec,j.q4S3K5FU,R j5:%k' eCT0 SI5jQi-*ک$Gbaݢ @b5=:fCZpVsX2XXV֌3-k$uQiXNT֤b׆k#k^|G*PժO%"jVĂa:QrdRH+↔%.+Zzfu;0hoԳS'[@LŐg5(µq,jI5:f*|Z#, 25xri-iBZɱ$ }rEҩSG!QRbR͵fKnz[+=7ܴqijh 56ZџfjlpJ⛕X,UaR!لMv[%>l@GeiW[M 1@یc)x2{W~*ȝ߸9},b(?zܤq%á!B4NA-,hȾFȣUDg8⢽% VJE P)su.JJJN\<x_Qj @{..p}ϰ@y-g,b*3I1B66>\4`tA 1FDݼ1-!1Wz&1**SRVrΤ\˜2X)C4uZzbI]OcՒJ KV4*J?o֮U\W"6Nii9ŸtતXWe>Jc*2]'!iVZ$ #"}LJ8J,Ē INQ}k(%RCEJ %@cCʶ'Pu_eȳ8CƦW1}L> ٵ `fr!VF+LkT˨,C.ɭt݅Sƣ &f63гY2(QV!T{/FZs.pg2JJ4:!lI˳v'(q+Ce̅vm5~% g \jR:\X+yٺ%wCw :(Œ.ҟ$Ԣ)aejpR "3X:2n@݌9K \l2_Y.+VHWsO:8Rܱ}̫w~D {;!ΈQ/y[gW 8B?{SLs,͕ K+҅wk & 8GeN X0I xDH LͽhX*Yj|m]]1Oϼ)h_ol*#8@,x )q]ޢ4,ѣWo/98}\ A E81M]<'r 9 O9>'pɷrus4ݥD15Xdt wO1{s#HB'XҙS*ij7/bDGIAik0:I]8YM-S$S/:i:,b?B!d> i銸Ṇq!V,PF낺JݽrwsNYh#Át#sF SҋEEʂgyQ"& Q9A TwAYc,0uwIw4?!HJ U[Qm9du k;D CFpŤ ǯ/Kk?A1Dh%X8_=$N#YJ$jNksy^k"N~o[tF)(-.qS S-t0$iCAh>$2%qjnQN+E<'~@ayt.>8ZJ9l°A]cB!n)/gޱq Cpk,bGOv^^^G$ ߧ. >MYLNx[' GT!mGws@: wnZ6L+e=`vb]@;Z!;҉s'GGSYٻ6eUT:7~nNbHB\ qJ\YP,$! 0oJ]va+}c3ӭ::r44ClMЕf [U_f19׶LjyɡTb { n'8"2[ ܒ j땎v!/cLu' _4ujK \S.w*" F"Dp6,^&haqOXU󋱘gY3&ܺ`ncG"˓8J$DJvX2E\mTOE]0FWD G'@|B54*bH^c@cm\pA,?GtP1ֲ'[G}؝?Fco0j+GW>0U .dUPQ?Xf}m P'mT]¡qt$VnthZ*UFV<ў&r K"ʃxqH1˭䌟 -۷$}˓GBcD4 I6U4 @jಬF<̏)AT, G*/ձ7 h"]?kcAeB9>b^l X@ [Q1")AeN3֛9 G-/pmRRQqZ|+1/EǕ[wHf% %1{q%XP KT׊2DUOU@j-*ZT')N\nJ <:q{RGur\zs E\o\0+}8"Q6q1̉_e\Rkz%mi_ {J _6"CO"IUy&D'B4j:λIA⸌\V6aZfT&FVa 8-^\"s[``|> aqQaʚXKB`6-:f'ruz3\7,93Ӝ #$<1Z(h㚹˄1k8r|B׻"0uKN8!\Jt@f ;Z`k£#;{Xnke# \ i^NV5VPWP)ƒW!ykZ6L AEcT\9eb4'9 D9,+ LLϣ /&s8ԸYcFqލZq8`/#E*2Rr"Ys6,u֙ }Vno,&Sml? IPn Z-z;yMѶ]ܨoZo*D#Z2na %3r+d/'[y‚t&ǣ+%Shu_.)ʤҫKcs|l 6*Mo䳩NFE/+T9Sdb)a/n5 :CS5)jfΒ VqS~4~ FĮn'*J&C/fE' Uh"U2r4u.ƗX)9 HItNd Rc$+Yn킑}\WN9DC'D0 J\/0aF8Q4QRnDKH\>/%|vK>o)FqO0#:ϸA=Ulbx91^+, r S_73QI x=LF%dZ[!Q}'D2AwS^ ڵlz;y* }rj.{Mg5~GpFZI`ʀ~㧝ǭ~3Ͻwn<Է=?Uq{kϟvGiNk; OۭK^cD.oF֦wc~N iLZOݾ];Wg8~!|P3T_}==1o&=q.M}ԝ 7d/<;U4.ړѩ_Oo533z=sron3US>5{h04rlvu}h=W-ϙ'ɌwZ_.MMɨ= (<}׃_jcÅ/dLUk_KJ񤽾@b-jz(烷!$mݷw']=sBw!ퟦ\88p4pOuNG]?/#L?!cw6^m { vw`7kQop8vO{8Dޏ}P Iou`^e.7b{>o|CwG3}<եnˇaS{G`s= '~7/?5305~ N F.U# mqKX78}Iwֳl]ܷ ̺?w F_5uAהUB]eh܌&éL{h:CfΞL쾔ٮYO}{\L?)^vkowL7䭵@b;V ,휃: S36g5h&WWZkm]p␛vֿVY1s^~ۀJJ&x S\0NDU`NCNd9LPz!(u`!Z 5\)0Rn &< 87g ұ5nZ*ld6ۻkz |}^ƻݍs?O>0?w}/IՇ4=wz/7v5 Z:H0bl&uZa8W J71?7M 9,hpR89a, tt;&z%!1RJN)"ֽy{;;E0a榟CʓSS-^_5OZ|ՏYA< !  
!;ڐͅKJHJ>g%1 0 NI絵L($ li=PR e/ yHf)2KG3y4TfVҽ`p0zwӽaLmnTc\yCF1C:̅ȋϋy#3lَf%vَfhf; : \EFEDscuf5:ΑZEe)C&9M@'&䕐dRT(Z;L4EVح -G2Y(LY*QD3kYڳ 8hvϓ I<}:x9*t(,4C=r nqQQ-g+yILJ**S22bH4]@cRԀAQ+^ pƙ!}0y?ka.ˆGV*vvFô/ʂ:Zғ HsSqeVP4k)Tq9 zXPyGh !SsμMXm:P`j6tyd^#(:g6rtaiYQҁn")mQ.:b_^ `N8cr~ìaO+)wE;9c-ôO𛝏j" 7P)pMN'ĻYcj!(AdҴQ*T%2 zJ˄O1X$=rμ8Qd@!HR @Re]mhWQJXڣtIj[(%$&kNqu#=[̫)GoּU ((e+ճ?^h7rN@< xFE̋*} Q6KY>f%n(L8Osݨ}BD Tʢ&I ih- )Q"1rYH4^FdTlӖ2[/؛/VT9 D-Y)f]`"geٛmd+:%{_3y]H^ Qo;qncڱkItK4g(Cd`!D(>x䋔IGM6;w CBʑ|dp  )fG.m@ߓg[7 8.IN(mHm>S_fVQ̻{B; 3Z'`B1`pZ+2*TyA#b6\USW8%UndWz=5\$=L\q+Vº;"vtU͉O,4]oόzHwf&%B&: V0;讝mpBC'kΌv̨G!T9}=3;f:p-sfDȨ8Qhk\=D4r;S}r}8r!GBR7VY0ڷB)'K$8ZZM|;NPw±v {W'QRQ &GCCI?d.\K)Ҍ9z'Hǫk2}/:;;+s>(k-"mօ[(V*CvTm@PKcڌOGM+yJRtŴi+yښb+vFN!X(wC}^.tsn{<۩Dw~@nv4IYܗ<^oh-Z#e!O0-Qci!#5F}=L‹3A .Td]H2\% !Iw(x Ah$4A B$]p~&5A BMڜ ԿR=KkЃӅ߬g9 Aw#M"𷐦k ԡK-4X-t'+jJRvh\D)ȎFW?%Î(|?k 3,d/u\1@)"+r%I̋dAcNcM>$Y؜Mnp,9Yx>@Y ;Bt7s}n¼V=2)S]!4,d՚Wڢ.'x1 3v~@iun׽@~a@ݯ `K=W?/ԛV)]ym=M}$B9VFYrVRSsԺad՘Br'!F5ZsP\N>1\W笂%nH㙧M=$qO]nlg8 !R.X &&mX~)]2%+YH0틲Q Bu\O׉}zOI_v2iEl}Yna񊎅8` ygi @&,7)aF4DS$#BO6zd+'ݍRt㙧 M8$!/ MhBF29!c1DY)~Mӳ%Tz^@ 1&K. WBHV*x'?<,ms/W9!s fk]'I U]]]i2?? 3(~@J.$g>x'/|tUL A=~wo?D$iz%> cIlS"Zv%T],md?=?|zد.f3]\R0==nEhg޳&6$VjYEoWӫOf~T|m,jOL DUU8_n rvyv{M[!;*LrWf2-ԛ|$dӂ'7 XTNI7,osS[.G@eMZ#*)pET>(MJ U~->HKA'sSXeThB*9 . SE&MA*KT Qsu,svqHw믮{sv=ڹ=¨ vd/SKxN0b֑a!([<=:!?X)mcԇE}B/ALgFI>גDu@1i?˒X0ȀhNPlPc@uNSc@5Pc@c@sΙ,G$80/)c<+9Rx?~eg@(p3:Xj%DVbvD֞(F}:M/  AZoE  r.t I2RւIA{nژ!96Mh!MhE-hcۙdiYɔ8"8a1c sRkFWf(Ȋҝ,ZvMmoOׇYSUYvgRWV h$k=퍕"6JV9,W>21r7hiqC@ŻTjqƁZP@N@AoRRLe sy^ gݮ3<; w¹U(g No_VhG#w+ \՗Y+G E*y\ '$&:Jxn RVnt2Mh!)\}iES.rє\ltt4!/wp٭K2Q,fD€Q? 3=Ivv|ǴG"jNa3#{3>_p^~7/㢕CqSsma͋9o`h~~c??Fg# =QوB>Bw7WjIi9Rdr.fkن eJr&$TJdν,^uXt[_Ň'.0"[Dj:f^ǶUUGƜvG׎G|> N</φ~y~Xrt)x)hj^z8Am4LBС ֒(I!Wo{A$H^!,)-d]FLvf?]r&c,%+e-V8Z.Т!;xc2a4,Vě%sћn1zloo;Nc`7.b[,6[!AKMX`)TRMawIps7u㏄K-)-B伺SMSy=`|,U?Оw)CSt&O9N)(M,RkQxMԩ7 kUx йqnUrn0k)`z& 77"9\pޅYƝI'm1f2 zE!6x{9J69v3eHu,C[8;8(ĸl ʹ@+1`z' ~゙CF,{"-Oζ (!0c4+Lv_!ڔt,6ظ 1Ll 9 @-~%%00ݾwy&MISe3'f'zQ pҥ4s3X07| ONwB1˟_?4i[o_7|)Pq0O.^]e{h]:B#PA5sϷk63$#QoW:A?wnP6`9Im8zT4X?t/0،H M!i)V"- f!qdc)VicB.΀a!#V%CYeEįQv[@Ig]}I%oZN(jZFnh-cUmtEQ%NbZDnB)6Uo ^Ҝ E*}_6Q/K%rlޔiO14eLI|I 3rDvPŅ!A0vwBZf(cר 7MK[ r: B= L#/̣q*ܢN4*\Z=@&sy1cpo%;(xl0 <=r0gƞ<`rLFPQ\!N(zH]`qDut<" j`~t젃 .)n;I',0&aB> 2_vޯdB6̬X"%52q,jL~t/!3rLQ-%hT? JmA'@p%Dd*Pg8΁20{L⍈ ^(@yA߲yŜ+^T9ʠ& }àY TħƨiJw (@+(A+ k@{Ï}D_]wܠKHĶ{'C+(CUa3ulC8ϫH:"\%2ZT޻wnPQ݆OLjvRAd[/!Y3W+uuhn-5jǃZ9t~ wܠo}"dn$?~= aQg3QyJeb_tާ*l]7׌7)RYCUb$kZOBe+Vy7?#|&8$&wYjkʷ?c sWh\ cJv>ˢ6S9;ͦkilk3Jf6P1Ka !׵-8͖FFs"V).l2N¸%kMWZs@ݹI%dhCeǸXo،.G\ Pƴ8K+~*5$A.XwO"(";wٹwvбCVBTK κ)noo쵉RsQIo0%026ɔ [pq.:.+䫸O" qDP޶l4rD)I:ֹW21`H?LcqKe?4F'MNŅyR)y*5IEJuneMbω~ @@D R%d-ZWm 2 0uvi8Cw`wzPR͙ G%9/ hRu80LQӗXgQUQxoq -f*v[q0O`2gj)9V6[ Gԃiya3 ob9X4;8NB.^ OR BgZ?0KFtoFFgX)97g U{1d  fYtpTka.4 1;C\S(qL}h6 A$"K8X;@}. 
묑ƒfƛ =ՒfDI)"XSUUՔbffHk ؘGR$̂O#kDF8.,I UhQc<p)ԍи |L RX=Lqtа%ɇ¹DBz7;5p5"n5Vp11izOsZ纰-U(`M&&GRU.(+\pBTuXEkH%Ԋ>J085GSfg9\8j:1 #wZjWzvsTRZlG%rmeRǧ% L"pi)0IR3LM7-hTOKdaՙBr[R˹T֞V=TQwBSw$L <05&:cS[4ʶuAT58w$WtԤ@@CjV$*uRRdcBk c$H[43IjPL$imd X"^5/O52Z;[k Wc\0,y[vq{j K3ݚozQ@q ן-Ѽ@y s-t[q4yza[<^$swk|Q#g4ax5ſv0YOrTgYVE2^3sqO_p] ϫ>D-yamYtA1yzϗ H2LU~U^OJ UcJJ*ުQg n^-}#ǴLn5,&!etc ?糇t,l͟f⏟vϟK7 yWL$s"UnϿNߞ_a.n<{5m*X,]Ƌ@hf0n;l 69*1o:%9꬜Y (Z\1Mxɼ=kd]HN) Bxc4'\l@0XUN?ɷ,X`$d07NNZ mQC*ZF`HӮwU*x#󽝮cyA%0_qxo͠.7۷5uI{{/CJȲJOiH.x颖>*BA9!X])$X6yl^kE.mŮ|Ot9N'o_5H&ˏY_7v9Le\̿Ԙq6Y+2}o)x) l9.T'>F+m޽ԁ[Lƹ}5bi $"g>;E6OYdֵёO3b[`ñ5ika&RlyF2)3QO9Lb<5)ZR;j_V):#8l'fdWo%O;GKۗ_FcOc"*Xmi[%>7n{#ר{}<+9R5C}YJ#L!2Rs˨E9ְB\#q,Q{^9~ )I-i+`^ ёRMw[:QT.$tq[pĮ|vCzsit䓆AxJ}e8"T F HŢFK /5;Rp8azΔ|* YjT%E8aFD Xoò(dRټJ3YR*>:T''] ݋y}+aPh,N>Uz)4ewK)Դ%/}Jv3&WѼ-9*_'>rC_EȠ}W=;%J0$X<Ͳ۪'0wYdnץ8h2"=l3R$皳 mY' P 'ױmI+??Tp*)r\mƵ^*ьdž8yo(rQu9{x` >wG+_1he5P1*W)9JN'(GT2ɑwڐd8

DN8"&cAs<~s}C6Whw'1]~.)|FwQ@ipB\>㭶</.0x,NKDW/<;pQTNDZ_k F"'.ͱꩦë Cx C!:V[NVW-՝<</0k-E>?. p2E+? 4X~}H=Qd^+!M2+LIrb-RAcf>1 !g6€KLfj3I_MCJV+:BM6-_}uCoUVB=)\^a~#"zS0R[<'Z" W<"?|Wˆt>K0>A>ȄJ(f0' [Jyih5rGּd* !Ե"8!V㙑.^Z$RJI@5wWe7Ac^28ǞRwv,PE83-K+E7qo*0Bt%-,r<'F ]N$DX6K} Qj }P/GJ1Zo16_09c:Ϥl!benI*ip*-Vkyej\OTD[("T_Y"%dq\!IYfIB(\"X 'YQ!MG+~\ZP8_.Zؤ,%r/ȅy/A.~Ǔhsb?Y4/<]Ls^VР?sA/XABeZd&GX<9WJ#|'J3Z"R8":".1QڦZ=G *r>sǔ8+"j1AS4J F_aʍ Syƀ5&yŽ=EJ|9dLe+ϙrsEǾ.B 3d- <ÀA2Tɚp)x9XuiVk㨣yCBXFWN{x/&:6Y>yZ,8m-NjttV3Fε1^5v6Tz'w9F,c-:Γށ?j}-D(;E&FDki-$IvVTQ,WW@v0Fۓ{'Axq07/8 6+Tjw"@oȗ*@(wQEci}/GdC=~I ߙiĥd/-Cο;m^8x߆*+RkiԦBV%9ׂSgZx6_![ 9OQocESCiF~ dɦ"іlH|ߙ; 󁢂  >ܵZτkÕIIvSªk-5A!X[6i,U|}O`vFX\{X&TyK[kk|r%d$zJ/p\ul-G*#;u7NR >i3+_/>m{: i:Z@Xjr?3D"+@&([@C,L: DT69c|&U/S,V/0l?غ֕-{ V2gx/(\w/.AJ|=\_scxw]j21+D\}ϕZ45ӷk3V).8cP- 5fR./,NoL1RN溿N|rv]wgV%. mv_`Mݹ{Ҟh̶\LszCx=9urZ+0-`l0*c*un}Ǽc>*g07WpP-r}euj+Cu\rwǨ{ LxCꞥEe˨?nj"|V5+V'fE}F{Yu ]neiPuom%ޕ'C v_ϳ,L꜅y)Ffˑh36>=y\}}9,ǷRu@=B-ʠg/i2o%쪓-ۺf (.T}-1чN%[u:R䉇ܺ&ۺ0z,a|뚵>{kU[׬a.=WՆ5~]zS0ެ7frޘl74f._Ů`Y<ʱޘj3c/E]v~K݄}Yװ]:kP%+78 eA5gO|{k2ʍ}M?0f( h Z<",X)諵pg!úOV9/wrI_YL5ZjW{h=sYtQY$&b|Mg`N~¬t=ORSiȓ"!^$㚂t>0ݙ%ӟ6Èb"+Yۨ ɪd®XRB)jsdSBwiQ}R<ԈXN4<C7h9aOGK$-piIF_m<F~=)Tc ͡{ w v>`WM{1BO;EGuB5`?D V;oˏHоjxx5yc ~ G}wz"kuJ;K7t¾vhVKxfcƤmg4iМ侘 'L:PbBqI ]ݬuU]f^Das[G|ѓ_Jwn1cm~q#绬9bi#-\,k>Z닷9 wE}( .|/u:h;i2'n^Y5Q>^2kM~>AԁҒݜt8vfÛ?j977r\sݴWCu§>;z؉:I8FB0Yb紫&1*ꧼjc<Ş1V6X&A~fXnvdjóQ`/Sz}ui/6K3L9[sOr0VĬe=x-zl; T8ygP#hqXeS=UMM}e6.rA|:n XeEdI de_O$(ۃȞ5IS'ia;?:P'A;pvLia\v^{`jEUЎzp m*ţ#M3޺?. esޜїw}cMۨ{pl1 ^od9 ]C>9T_f4 aч-(%5\iS}yPƓ ̪zqkWZբ}E㈾+ZZբoˤU$g+n~ p0%!>7Й,G)}?"S4űt{qyd.>?|h2\ѩ+ !G؞w U:/V\gn!)YֳpHd&'2iMȒ }ޝ]?T{ o?-9^7o_G'uv+oRu`PC6键G[j5TXܭ0M&ڨV4n<*%0+7 uGf}ۓ7ɦva4CEF-Y ݲo{Mוӌj 9ǻTm(D,B]%7 !=BOĮŠCP&"vGgCQK7z=<iO, H-hR_o4D,vzt<8#okLB0eyO)O?SWrM?uń+(.G Hq* 7e)- )mx/Q 2RRѮl y^yޠ{>А]ݿ\/urݿϤ_k=4׿g9&4@Iw=b} phO2\y|BmMd#ͅ|Qѹ-.ﭕr? N ^p&sY08LC Y ĉu:7o̟~BmmlJN#:.T}Q b`|   !Q@% a78SED5G':PR14cӁ)i'oZk+fN MCMrlOCqČ輺1wCLgDr^|ϗ)(T#o]%K7H=kP!peSw|-ڛyԨhO7h鎧TY p2c.C7.s1XLgqCGw>982>1T W^6n_ipc!3>r`yP\K V-m;UZ^`>T?n1* '=wT]^z+#q&F Oq80>da!AP'!=$HʓP'C@3 p,b8 {% G$&7N>}SG̾]Xa1wssqr!p[h?O3ҋ2^N'k2 C>tMCeBlp<؄Pd!O3vJ&Tn2M6NNCm0I6CiD!]'ZX>mdfp t/߹x9o'YMk_|5g2 I^WYV\q3 T*@]9uؕ`u=8x$ѕKx W3![644zK]ʾP2Ƿ9baB VBJQ+HE dRGBq(`/:DŁE" ǂjPD L>ޭ_ bmWqC+knH] EaL8vJP9@8k#:4=Uc2 8XCx`go)P,K*`w" bH|Z) Ӽh+Dj4f:|{G/oLC/}𭳞srO4̤Jât z::e2%eE+@pϯ$g \{GF7s8h08|\Gہifٝ"ͷjfNe w UFqba,$`w,#22ӸU9!9Tz?Zbn*憩bn*Պܶ5fm8Oq;e5倫%2l- @2FTM.$FA FVtmB'A=3lXY۪jG-mMWʊ9<DrdqUU"*%*i}ݏE%7 h?XnqIO)3ɔ#Ψc٢,5tn3%[HnaQ_%мK3r,TRE@ QXBBDcs eGǸ zhtݶBݺv@Fl] P!C NY5F:zC#T)}pp0Gr[Vdܷ2\YZS2T{3|*vՃ?5l-2øՓyVj6tyY>>g2 iL(E%g/)t-GfL$N2ؠJ*.4EO5y[{# # M*bP?]4X^ .\ jEyeA! 
PZt<ʒKiw4JC$ ƛH5BA_x]PQEJ;>HOT&(Dle%NrX"!'V* eG DR( ?z.@Wʕٳ-&) >)0,aMƲB,0*6NMJZ@NJ4BLtEM v`-HO"Nr˵-h#d2P=jdc LPMBo .H+ L1v*T>2(nPA"~QD}BZ*d bDDЂJLVN\yߒo[=h7d: K9P<,~kސS}V:knQ}yCRBh @ X%:PCDYmĆw~~10u3`u$Qn-}<ϴP w70\DRc=nQI0ReÁq'O.Dn\0q?҅~}>/V,~^M9X+ ))՛?QfjK/vJv| 0pX1B)D\ Wp;ZC1 hr0p?X޸.{;ttv0VJۚS+d#B '%3e4PmɘD]@ @eXq1 QTC25⧵4{xego Yfv1=ICcl!\$ܻt^FIV)SgG ]8"7oNNA[m+36y(:1 3% F Y8+@ ޷B 4'+kEi 2"Сl3j}gi%q̶czSWs'4,e06pa ꕳ{V&:*JH@Ϡ*zcJҀ޽O=&3J61BzR\ֆ<A5@~J)gF>OR< ˲#, &miѽ4^Ve7?{ll\=m{5;?L95O22G> W׳I$+tsshݵ>5񟮮t6 L_5ǸX 'ixA(rG3ب ] 6iĶ6Гr@%'(-h6RdKMT:<"RGBU.b !E-ڱ:Ḅy[XY #|ۅ՝ BTȮ[cuXxٌw}:m~RSk$PJ/F&{$$Q96N#H$U)̎bs~kb9ezzhB֚xnߢ) JjHE -s*$ z9A9H%DDi4:'u"G~puRn.rw-Ja9)r.`Bzq]9X:Jr=0ѝpQJee !!9i 0V:(!U#}dP Y]*"BDS$c4lZU ҵiQd]4ݻ0-p2t9ɶ#hjUe1 Z{JФ^܅ۘ6/̈vQl.jxcKWJ;Ksېeo1v%ݱNH*IMv5-iF4jjqPi(Wz)C%R4g khDQwSH7޺b[(0Dޥ$؊?MH.yQ3S I1WhEI]kNmzhƵIeҤ.^jDʥ8:(͑۴-goru xu^ Jl aF]Ͽ?h֯>Jt+W&Y2 J(?T[ ~ I,-bIJˡ,؈b(4m ۼI1aU\v90BXa!YqU4),'?d{pI/ghPq:`ݐ0[6;F=f-+yU-qlưQIf(SIS SU>uD53YʲifNV;(eԎy_1rr|eЗ1߲dqg;NQT!tEwm.~!-LY<y0[̇՟gvw4'`'V'P~og2;c7i0'fsssW ,9 ЛܟS } kp4bi9)#z98@Ҁs?[L1XE`6އŇQ1ό rGM[%}P4cMDc'qעRg/?CerCSW?;*Gݰٗ烙\nHFM܏ Q3tAVngN]WZOtlq2S>i a S64 d[ N6-' dCbn擞j8JoiN!0i0 Ra=Sz`'6\1}9k?iHd>dY!kKrBbbÚi~ =x1osgš8uF1W|gь0CP.%?PYEzK&H_ϛp*^pf^ݟ%)Gr"w~PLNN(rč6KGO5P?8WvjY @q/NSfȜdPz]A8tSA8tq˅7#~Քx&4SYR+x W(mŀUuBq?y_^fJ3}x @^ܢd4߆~-f:_F;`cU8б.ޯz8_5w=#c"n|?Anh}ڡù-5= 9v\l9j *}^RT+z=]섬U>͡|Nї)ɺ.|ᛰ&;}APM9Zq )Mm^qdQ]7q&3NdƩ̸MzC @bdQ4*t5 F"hA1{୏:XnEQP: j1M11^CfznD%c=GiL֍zǩzbgZx۸_lr4 NZv҃%S#zWڕl5ND% C0;ژ8MT?06Z Fw#VR#қ{mh+bL(L}O)`}jqOr%NlU},csN=;sET~@ {/xr #xRo OZfZ@ %`2R DjTFIl 2yP )j9.&,0HX%I X/4sD](T!$&n0,@ih H}ޘw@15 2IR r*pp#b6 &H8 iAk&Iq5']QbQKülYOYI/ bI}nIO5YG)匽5C}+x{S5'@ {l0{U0F+1W~-ᎼD6e;Jw.rP( g7ɡ0|l ys:y^e:Ћ!i uهfL;DD x1]a\?N p<~G!ln;@3ە?0PRA8bBV+h0yNOQə`)a?מ&(Oނo> ']L]!͗ ;׸Ҭq=Ss5c]z<r,rs4-=h=1GsєsT6G7M4k|&ջԽl?LR`=3&9vǦLJJ̔Ziw*=>..ͪ=nڰ^fNMӸh؀[ Tat7hlyfb_seݗtaݽha[GSEϵ5gWm\t僋˕.e{=#" '(9U myLO` ;H` r/|b;s YLθ[87YhX=XÞOohTF7 7}ąkwoJ5Ew$ĤE֙xvmũgK7DAm$/BcY&+6:ɠi sf<+O?IP>*/M I$98fl7f~.=d5+ܓ׸8l%bsҵR+O\ Z cBx9v(%4׈8EXl_w cł:SAeH|.%&J" - xf ϓrSX2ZF dL'<F-pM 1vIBV$Rq  ʹ0E$aɕ\ׁ2Oxي wfM$X. M1e'/eAMl!0(*.@5 /M*;)I9w7dd*D2pyVLgQinLBL^z,*C7qk #Չ#1k#X#qq &)$(:ƛ{P+ ݟǵo5q3+ ~]ä`4 p!WU}'h3iwr\ \Μ(Ll`+DV3>#?Ԏ8 xm0{Ѱ18of%e7>G`8-Getd`ж\60wk?NEO("r!/YwhL_|4"Mc6]28x׆`Bum3Uh- ] ^̀騯uhvjV݌}(SzpmClGqsGA5S7]՚B?@W~oqz|4'$HTbzPY T{7aլG+ր>~ջWvzJZ l55&n,A]֍o{=z͸XM7|n lN9|߸vswߣ~~ח׺Mܐ-k4 78 J5x487 4YamHxdᎅwXwih:@5r(i?w@1}?Ҁʻe/\R,&{]w|HzkrΡ <5'aרOw;w?O~]?Gg|$}Ξ/3OZQ9nvNX˻_ DӺ#w@Z$8C:-KAH w`7nP =c9-)\Hz-;r vA.o:%7My  TߘR_o=;h?w_´ٯg//>ue|?_Oɝ;~.ԡշ'cuf:uһAݪh8A1-p9 ~h r5bwKX :fmz} p&5ḏ0\x}ίr?YMWo&˺e#Gdz5Q$*:s.lr6"vB2fWbLӟ|rS5mýc53ӧΧA\ &9!%DaDd pr.K?:\SÌz~{3.9njSXR-"|w^PyIi_ɰ6! OkŐrngxRL!IH0O|CO7pUvNDmFcR_oPL.&d Pz{ԥ[vs7!Wjd[,5?W18h <Ď  g1E8` R[39V0{G$Gc'!Z)шp"xeb d\PIDRfH"ьB $, *q~tGE6d<EkGPM>np,rb߬uGGc;Ӿmit^5RuJBxy T꧆qvO18}Ni& ?\7cvox\vwMǤf} Є]pjǠs-c-pNR hfc\pC%+L.X_Ji"{P/=XbDw.K$&/"o-Umh{l9ڞ>j[KbB2UVn?IƒNDAZq3ݾ_$e^W?T:aəHƚ}^[nD*&kڽ[Ͷc; N# |>lr+^K{4u!,n%{8Buy^ ߪcJTzk=n=~UY* '^*eQ4ʏxupڃWKDzN;dvtA Ș8R' (_um̍twn1գgFr0 SO}fMOz\GprԻdRys$)YĜ-G0הﴆ9ѤmA7I;C2K;^-'juKĨј{̒1; &=8('䱣1+s@2orNta uNn'|5 }t<4T-} wI-ӭCuVǎWf w}_d?-F,@i!KX|H DӨkW{Z(,4tet%$! 
b*DcӊSgbʜbhI6r*%-BcE=%!>tRkb$aBH# H0ZYxBtbCh"abh$ -pAI)E6aabŔa$G=0Hc(LD+bl+@s&X8hr{E%ڧt ] c bQK:7UE2" xМ4`VP$p(UgZ򗽫:zlS/3{uTf @6~EgI~ J-P$%jM4~n#RÀCO3~~XO3%sS)CA,Jk h}0eAQ+D0  ނjE%Ҫ%m)ɵVn$J2.]ޠےmi3&JFdMcj2إ@Gr㗔00*K24L-Ga*֯ͺ4gx즉溍=$bDT~(M1{C\lvz_Oo 0H KwɆFӒz@ ˺aRTq  ;pKb謔9݀ A+"mw7IY[/!8fزhӆbbWtE,òHݷ<ۗ}##Q(BG kۋ ^b.<(kjDhZ2Ѣ d]'<(=wb;)0\ 5C*puI.lޓYdi#!{|o汵tjR=;oA>8޳NY99׌m(W5(#LxӉNה;bP='ZݙQ[5jA}J3y!9g!)~ʳ|l,iBAosS6p h!XnP2#$@/䇶+J![]?dy>m-=Mė1N90F{.J'cB/4-XP4VLZd%. KgV+e?a#adp&F#OJZ95c|ˇx68K f(0Y_dDuR>)ݺbmnl8)=,Zٮ ZP>8'W9T8دq [ʀΩ*z^{{ g[ dWw\~ Og(Y [cwJK~ɖty3dNtˠwD˘zozg9!ۯ-Ǫt4qag) NiYZk5jXZ:UQ1G)l5G+J=Nz [/IFJB%EifmBAFgs%\e1r,fZ=f<cCvZ0RШ4c:us3nh2iK#vV}Lg ljT>\ګZ4*{헳rd3B@oK^׹9?x; W*wz6UԚOHúv?XbsފΝmw޳ͯ'|cq1Ь|y?C"#F?ȭ)j@K_(VݷTDjS&'S"djZ3ItN¼hk嶢Sت{?e䈹o`S3ɲ\!vg2.-@ I\ʔIAu`ESi2C,g<@/I%UN%SȺc(5˗#5c|Ld;Kv:[~(zm&yP"@7;|Dz}'cw>}*xL.e&Gv0 q Ʋ@껮M,{OWOy }H^ cplZ7*韄}*7Ou@>[-pLߎ]y13mHl'd3`ȣ)o^>;]\ S$8Ӳ.|vQIߍ~pK3QayG:cR.Lndzd>ƃ/ "LzCܿݭw750\hڛG| ovpj3s;/q N9 *`Pr>O-]ߕ(|5{> mtSKHL{HjkGqA5L@g?M+$lX$x˗>叫 ۘz Q@1˗TEbOwsS0cА3}e1SYL,h>\>uj<HO/|WOz8Wqx_=nrP(n(H^h\@=܉qF2qS[+HUq&.X"9/! ^qp{)."?1|KF3D)|51˛W&Uyob^Uob.&,r`I rU`l!Z|&sAc'Ep>=1Cy-eb\è-l;чlxMÏB9Ms4ό}^.dXߪ :otޖi@HE:|s&$jJ04!Z ;g9 (qb]܁w`xzn9*j`Π!QQ9ui@CSm;P+uv#nuT_>Ԯb#H*6R6R[1e`+ & vIEu&dJk2_cuz>@ hD#תhA6J^C5.ul cfmȏ'Mf=`śOvfY &2+)B|{k^{fJ-[{* %ªfv)H /`$7)v\7v5kak6Rtd$vB}uIL (24}u)Q";Ks]XbyE*@*4(1=)rS' v`\Lgq.!W(fPg6iqݡZЁ9N-G {d#[Om*NKk9V{yX]ň!QAvy &*h~nK.b}CN81A8x;cmz( ɬLMƒSg6S.mUw2&Ҩ}ws˛Ugz1gԽ'K6} `Gܫ])81rJK{@p'8ruH'c!sK =tYh̗{עs.Š„eF 26UهvЭ{עs 5[%AШWC7+Pֽk98肏kDg"i5,w?3m#+zYHU7ɟV\iq,߁S3f:{tx,?l'+ cLam:-9{8gySuItD`iq^hkfkJ5璛L3L-'Dh&$&$U2oGm^AL:2gctzR?_pq8v7?qp &9ṇݘWû % rq ~ix$| #m rqWӗ}%G@SGr-!ln$> 7ZǁFcMYM5+.ڧp!i/yt Ս` +jjt6؃) 2Ы)Ph;479]ҽY/F, F=:5>ftUw^K0*q6߉{3*Y/FjN1[㣭Aѝz=p|>:  BBPt}#p+$x%أ Qb٩F޵\kݷFi.4èIv>:B&MNK]a81BHhJ!)]i|//y-[ nMx-bjBE8N&pv{4u9% qܟu^ |aBZ+!؉H\aHS;/JVBSd5\,mK#I dg,)s($aV~ eBIm!%z7WT`'(]6^M}GƜMgKl P8ZvߚQm"YmϫYyqvD&Cs`\+rXmn`KvE: qbdsޢ9HC+[.cuR~S)*Geu> eǥu[o`ӥCt6:HHY E$ad|BRiFI:n?BXi0*Jm7F9swڠJ=X=!!'7,ˉ d4k=Id09K6e{E`2RH!CT)F) )e'̜0duڤty$N sGqZ笛5!"4E&eTB9`p0Bd`F"./3kpFPXh!`[c4N5btb+agB3ɹ9S (A*fFq# &}@~Ã7\kp~}n l5 OrToG;K\ږR0܀zbY)9S f p_ SWrEoH\&46ar-IrmjsxϷuzTS0 ;bDz?FGHgŋ#m|3PsvM+~\j`b\@  BTm;5ȼ!nD!.߈59j xn½*J(d1-e(eY)fHd ?ށ Aj47v/H7Ÿx\{BPN{ci4ֽk9Ά)RLȃ),ƀr0Gq^,0J("J]Laf( ʵ)SDV-"Oa{06$=>ܸ+췛ܸǏK6O}-(QR_?l>{GD0)Ƕޓl -e]|t~[ >=/wGtXlz1TX{NIaiHhf˿ ~ק9 4&܏wwPA}+0Cs"S *;CDG_2 uE.6%N]jP~T~NxJ+WB4*X-h`m[ߢQmEo3q;3q;ӛδEOkrC0 RB܂Q %K05*XYt%ooױf\GQۆގ/m`#lsZ;*_4e<쩾xBE̩^Q]sl\lفH~NQ˕Ȗ<?\RT$$Nh0,N8QL J$J:Z`no*S#0-F匒P)v e8s6J!CT"mjVYB|D2 ́ȶ5n^k~3QG4IǃY~~7~dofͺ7nxYoPӉJ j,;VD&.$iH+pkԘ^>RU c=s?[to\E“DD7ޱ{\z1" bTh4ax岦hD䩑VP=9>wnK?{n'.9 O:}>O:łN]SL$B{@p'8lB)aTr FdY&i^&Mu\ +[0y.¹Y!ATH}GgnݻVSj`W Tƴ+l#Z$\Bk!Nt|s_g.[ƻgOxgBD(\C5M4J9 @b IV]<ΖN9P||x\,6(㤸ie[2.F_Ϳ }}DϽғB9!X.#%"8 r6N])M8#nLuC|FQF?pq]X/'D1ELOS6q.˾8/Ն +=9'ʰ%\ܲ_boی 4)TKfJ'+cJX#mlϡYri}w#sPyqu]6[6wjǶb(ǩkhE 3ʌr c\hEȶHcRyG` hle&rB$]TjZtمj&aLAԱodLvc,@S3:NtLW:`lrJ.^{D`[a˸a5xW _b4lLN-MY+CT ѡ@I: Aio}RNʏVd]:ve_]ae}|QVX_?2wn) cd6|RG;hɰW;p]q.k.ν(,t=D?M1}_p~(._^8uDow6 0#Bv_t?G:pڒ:SMP;q9w`;-5vlQE{lyp6G{mGU&GՄϮ_InιU`FP<"UR;y*1TRqVz/ p/W`%goB~~5SEٕLNP5l@ 괯~I |^ /]9``I"'U{t"o,cCUQ!ݎ$mGvסv֍3~f:(/;qzQrBMw=sS4Yc10@RQ(qDZBD !Zrkm~ҷ ["!\үޫTIl_b [@Y_D'm5x (-A*՗Id?+?lՠS[pcd<2}$1J);(KM r޻U z;S{ly+ޞUK{+wz/?ERT?N{Q0 Ť"|^KH#(Spb|Pe{] )"߳!"GˏW|'r "jl0Q2,XkjykM \$剶 Z*"h ;dk$x?gj\yk^ 214|:_8_(49 ˲٩BXO{ܷ̎[0 OΘAcZڏ\_k˜%*h:?ET$crp[!IOrŠy% g0+^:_Et&g0mhl C֮Ink}>{.hM%Z~(?ֆQSwO]D1ݞ)ЄHB$1 PԥTTRL[7_X˅ތ1PQR8w<~w[ZmF$.eqfzJL,|4KhԺ8$R)WRyD*gߤ>0}RUѠ ;bN>4ph Xj,Ǝ*=!2x;6}s=2xEw![b ȞN>^= 9C-G;_ iB CyFjyh0ZSt۱:r2q*yc+k[ȕqK)9oʔJI_{Y 3] 
׵ѳ*'Y&0ċv^i6[@MU#5(t`)2&h3潗'c 5 1C(sO3GVoļB?)RSQ_- A,賝UO}HJސN]&'S9v~+TOGC>мO}Ь7c͝RrO0`)`8b@Wf H ğበ, 7wwwh;*԰.oh8B? 0ixh~k/<_Kp_Hu|smjdWGaڸO^_򓽻ի/du\p-4nsM(ZvOd^ x0,#Q z稜j,98V EG$EEƈ$p=R8Hp`ËQ Thx{Pur\Jmp n'PRGKA$U1U!@NK[tP&yо" 0*ԩFqKGPFqU0+fCH$рxcT *B1Q=Ȩt%7Cn4H$*YQMPDiքP9,*lՑJ@*NY-Fs}u ss#R9<⢦΢1z$@TUGhdלxN׸jpR$ x@n4͂A !Z#PeF>ZNhPyiV@$PX$\ԓgsO6Cur 7hpqBEOTZϥoKV:r*PLqc/L`Я#}@2\4I"qB%*m`,VATjf }:e &!)DIMjnLmY9~sVУֶҸnX/gPAwEщ֎pC_>bQ~.晪ֳW"+DE@%B `8Fj>}}ntw6]ZlM[{w>#VGaq-z8lOoɨFC~ɧaػo]/Mz4~c~}{:DKz?s1w'~x>Ox{Dțw*;7+~|p|"p m'D<{_(lu, ¶->0m{IZeJbj.mݍ[KyQ/(Əh½zަ>j4[T˅x]%f)j_]Гtp뇻8y>^|_.7 T_syꗋ70ףx(E? `&pT >7% C3SӫOy{@]#_5WhO3m^/<0V g 8՝#OkwyT֫y}lz9o׬-w<xFI? } [7_7]#\ؼbro6!F^ 7 5$F C4"!JtY. 6@ s~l^'hLoqr3DΊn^8Fsbſpݷ7g۫! ??Fd >p<\7ϓf$l`2tx 5Z>i~E,ƒ,;Y0ke+tcX5x,4f% =jaaPתtƒ|,\0kY(i=#󺓏G*n'5>2G?~|s;Gh y=|>ѽ8G?d:u|ѽ蟣q)^hxBCдYOR>$!8{V̗I#,}OϙYų+\O=m'Zi_ ?; mWhQSq.]MԜ0J)mWkcCv^<j'ZW "]a<%WrԚZ,8ď$DZS]GqJ[-"x@轧¦!PrJ"]rw H.c` Yr%WΒs^;K^r@}=[̓|:?O" 2foGkHVal~Lpn¢ĝǗ~?M}ίG\SZ|(8Gȡ DŽlN!Rrh^b0vDH Z7u N-_^]NQupz5-8оKZ@):hStjN8EG C>)mבofJ&k}?N\ThetiDx#eOB]<B*)hi]޵6 tYb5)IS0`{lUeQrN .jGuS[Q5cPJmSes'zЇn,GHt[ 6'q0#'W?%4r,:^p@pB$a*z-PYV<$$?n9qHKˆP94P $T76Z/kmH,޾wuyƃ @3$t/M3HǓOQ%Z͋E`8Y}}KsmpӔ 8F\'-j򟗅>%R=qeȯC[~-19g2!+s_mg9ï|Xr:~g+,&A$<G9xFIE2S21__aWJ{uy29G lϔFvsC+paQӣ{C{1l(Q*hQÃxCcN}TNdD~j,c0BmLsj#z ˪cEB:"E6YL96IL4aRI[%LeY4O)HpToP)lfnhRӝYnNy` bչtW%G>:WDn<JhJ%ZPG8pjZ]U4Gs1m<\Pv&AC_ĞU,-A |¹ &6&68CKTJ \D}:ʂ _~90T.Zq\Oۿddl%=úWŖ YEp 2ry/8IqOR,Sp\(U@`&_d*hB7H2mF##A K'I3$25mA#G I6яwɉ=.Am&]"ve37>V,>Dasg81QnuA`Xd]@z2xܢ.y <.CF{n ~ݛԋ`P -C 6pXj; tnpg[e{\ƈt5@}ߛ .wljRl}K̟-7jWk`y7MB0P}%\ +:h@c, e$1c2c*#8`0`:+JV|Ljaq Ɖ="9Pb%g;RѬ64hby{NsV jȠ8 ?8%*JZgD`˂ B9JA:zS[4YOϟg '4>OrA߈}*]/-<]6QV=V !k`q (x*e&U2&'HFL Aʥ %4"mA Y97Cl־JEULj4}:3xk@\>i) { b4]?s7͎~nÇGGYȞw ICU=gI= l)* }P}6`lCZQgگ#i2mEF]3g_= .QxUЛ"d0%FzJQN:Aaҟ钮xp9_PG@g>ϺpB18 w×&4&h! \07ӛ r18]x-f+Zef1[X._MXа 6b0;u1gō"oo?#0;H~ %1e0RQ0z%.|C]i~Pi*͏_p~&.O(Kns֊d C o6D:l,fĘ`n*YUX'ϐs0sU˅2/M~yYm@8ci)H8@ńh覷 ,V KSB$@8#!820ɂXI͘JTb塕 x+ozZNéօ o=Y,2INԞw tRt?`*iv}n¹$kv1 J ! ÉCκZr RQ}?#_3v`ϸ!š=Ouɹ{wXsSw+‰=>Q9QgKYz1%brHNmDH9KUH Qd㴉Ԕ5f -KQ_ËNxG~k~.Sz& QGjxsgN^MWUc sut-EWNW};k¢Rwޤǰ`C/ao{WL[_3bZ5kT.FH=IcwF# EB t$:x&5n=cU&Y*f1P peZ zG M $*wrR;*SPCUGoCwf6b.!pv(NM2y\}Zst„<̏hڋo<ͣ@ڄ;s:i+x}L$8E0gW^DtҷܳD/(3֙]=np?gKg>ަcfHnDEutW-4l#n4Ljௌpi$KyiHz (A|ww&FF;j!YdQ9eWE9oEчuDVZ)]8Tqϋ`sf㾫M حhѺ2ΩQj/MQRu:  ޅa6sdCSК)s-H`g_3ݽ9L'$ yyfI3* 0Ksq%u? 
Vk.Moù`[Au ՙ{}E :Uk<:wj)j{9Cp1>h(1m3aLQ#EՅ=?RwzY8Tp-x \q0ٜȂ)Cjovxǎ7w4ROOssλ[<\⧋mѮPRۜ Y1q iYfUI(nm%*j*3e&2z-+{/A_L+/nŗk~r5])[Sjl}4kmX6g_ qÞ8<B_mAV)qx\8ꮪ֥r<ᚣJsR~H߄%w\ISxqx xo\0-*:` ,~&_˗I(-"`cvo{^̐=&Y,'9Şk4; k&]f-FwY,cI+Yys7vц2`xrU.OpO##}xևKFU7M/6 CW=Ax訔.*j9OrܖIN ,lYhTITkIYDȶ;6W|9E'n2!B;\gM{NIlDjZfz{1 -q#DSLb-LHLK!z#Ѹoo@m beJHYNˌW`ta t #(AQrQ|p)9WcYH$ȇ0vg4 c#` 1ˠ@ D b wL&!ʹcFpfU͐#"qgi$%W1 v+ʥ+)0H1^jMɨ%LU"T(`J`4B9 IVFEAMx&aW*echiKHO yՎj@h *kdd$Ҥ0 L0:AwDS#K0P')L#B;m-dQ0Q Hd.޽k3 l6yc~~NR\=s_c‹%ӗ4QD>=G}´|&7w Dp(}/?~:[0LJ23BMk38~vO F,^!S \~AlBBj-A,_3Y~/ d DI 'uo`/~?>z]KwY91=[<yQVBug>Bwl }#1 9#u=r]O @{ב]A=TPWz;ꃃ[jfGMrې;;n{;7/\{;7]s*t9*J4nqRZG p ;kFM~,]y+Ή.gT+:RT*JypsGu xq~Ra18Κ -_Nb 7sB>A폿|pN'),__q*𫕀@SSorrKfL'c~qt4 $^TFo5w8#~87[C$łGoŏV &b!W6"tilE uZӟtj)P6m#oNcvޭصOj*Y:={:3X m-o9\}Uo#K[Sy?x~I 4}'{bܶ3_Ք+WLsMRтD .\ &bo_fRY3ZZ x[V{w`_$<9/xbOqXr^ez:8I+_n?iyJQ,Ojhy|w?]+mʀMb[=YvǢQ:Ǝ&fnFy,&plO9a+ފ._^D1=:S|8‰}ts.|=^W 0,#t%m q;і%MlXl8tS&n b~?6&}u{vl MHxC b4| ؘltO6R,Elv0w<96qH7Ypi'T`C{rꐭ2ُj28cƦ łY )"1D 1U9l`dqKV[ڐϏv3gQŐmLBSed '%ŁU3iazkFC3^ `}3(ʞAV[lt<92Z(AZXxJ㉪\elem sxmZ' Szyf /*>l2A!uЁ:О RrqY-sbf괡d/g`)=TC?k9FKaňCϊ"?֤ul?|uF17-h2ӇO=a/xsC"~0Xwͥҧ&|a1F5iS.|HҥZLQbmZEŽqR0,y#"!/G ͢F{a8s%G&ZA(ZVrᬣ^ %H5BTt9zpcFߺ5UUPokw [qD1 "YBq*R4bX ə:@ԁS?F+SOT-)!$F{@!WNqU)(5b8J`5iX^·#͆N@V)YM P龙FAaSc3k ɰEH^:PopTuFU;2^ ל@T9# %q x ( tUw֮S2a8aCcbc9Rԋ3$'~If]?l v']RHQІigko/:$!!]Q*6JS]$˵K`, 1  %+S By@9XRI m fT8D4QAjaM{`A&l(5^,,_vp/(x߸I$:,`I+ T(u_v\}$\M[fJ32__< `QOON.)qA qaAUk0DNd&aU #D'+ cd[eN:^&0Q$Wl.P>l￲5%H %(JN{cU%; yMmHQ!E.&(l=G}´|&p7wQ \,gG!oo/@OgxF)T?sm*+3 ~zLD6@K50E.?=~^QT .F_}zK7#K UcjTimxc&b`ϭ$!.$4*74h2[^Mzav?)gK+P} !2®s=w?mz;R-拘+Az':D \)H )V 4f4ovO-NӁތ`#  |`Ǭs4*~ :?^0\;<ų;a* v=m&zҔŇp6u\~z [yKl;6:D@Z:vnםoi":u;H9-M>9c?{NY/:/wq~,*Σrc;;JwbOM1:SWv9D0!bH4\ m;߶? Xl:o onQ.LZSLoq:tuvSLj7z/NA8XK}D5pН':d :]\׫SpDkn:-]w} pq0DU5*QR{epӴ-co4yS3-}"MwmP5_A8EKr, , +%^$)Z.gٝYZ;FFoo%7np]‡URIu38],y}I1>;*C$uܜyђy-2W77ora 9wnr\+ves!L!)?c2r!<LA_ndBP{ݶ"u[zx;9W㼯R2&$cHўIҽsI^6~+B!&,0fɕ1T0D!W\e啞V٦=AZ:* =E"BOgfTKYf"lK &GUO} 6M"1 !(Ն[J,EF(%}&%9 )L"jЉ@U#BKb|IYLXt= ^' gK!%0ҢZ_@TD͎ *s+_)0%nUS\cvh[o[-(JW/*Q&ro%svg|(cAI)\}wBJVS-fad XEb$SKTtbNᬊNo\u:/_y"13+.2EFSk]yRzMF7S^I1˼w>4D&﫷n-xqԽat/P&[Rt7CȔ{,%OFo7pL\lumA&I1cwCظrc G v&hлo|=9?YEp6pg3J<j}',{2etS%TlrWcA;ztׁ[.QfA0!aH)=_ }L"} ԓ4Sx,}FhFc.B9x@udt\}rqH…8SVrl> Xp},masg3?J:J DŽmSsǦ䏇QJ*ړhgHaD*ړ-L,&jd.7H06+.g M2bxB xvȶf`UW.7HgW_z!.g'jt\9NG&a:$F0)@w3>uɞw%6%3j9BJvrusܝSJKWgxK6Pg?AZ'TYJyW8iveŹ: #E 1وȷ T2m72 Y \2 *_=p]XՂ3u0 K:Hk*W>SzTr&A%;IBO~ݾ*#h8ؚ+_ hيL&F!nV6r/D$3u?i| xL{þS'Uabn|/\&=x&>Q,$jg ^y~.`Fs\Z$85e0&KC7pkHE}ke6I:.|aui}FܮW)$sK{icqߝާQ؎"D*#QL*Ӿ SͥU-KD&r9~[k_}%B#~HӾ.wiXS*O&ﰕu@F|EJl5: x"u+y(@G0CV]l=UZ=ބ`W~w Պenjm2 2Px0 qlD/l\X&(OKCʗi`RdBX1#Daɩ/DXc$h{2 x7ósqg3Ag؉=gKUMD[//VT8.%&lg Akĝ(tym9piZ1o 2լϮf&RSqg?l#7hGoy*y-ZKfL25H7iCO(WXy67pM kyn$Fg@$+vPN]4Oiz,*'Dn)kiz@ X y~ M'aP9ҳexidkx<B YS*;Gj;0EG]ٹ[aGE&mR8Ķ 7Gt4PQHP [b'cWv5gWVHOc,=){آ[2~D#M~T(C ̼CD0ƌ0J8%.xzxR9\$0󒓳&IN]]x.<] t~oaOjqКV.q64jGCoeo4ڀ48i6$62h`Ѱ?^lǵ3p u޲3lylmi ~iD6N?ʁFӉzAxQd9ybozdxn\oN^.N&dsdX36Xa0$󌶑GWVۅL=}gca3'uf9K`q̄ŻY*Rc=CĘ49,FpIevYX # nO'fE N$`3 =$ LGhI7rM7L  q^DtqqǤn3`Ա o#\/нልo]L}޴ɶ{2[(+!TV[Ɇo:r yfvِW~t`ۗ,/(Z+ bA{ô'yB` ΃TQc[N:=hiVV9+t|VWrgw!j>Y{8% !Tuv Zqåi3Af+BO={oEswy_q @ }m)gWog|?kv;~YmgM/i#v錠eH9fL}i+TyweV@kY)`Y% P,~cQNidMu!xftz4iw;=O䣣m'MҸ`pq2KR +zq߃^q|Vf\.O(iwvq<&?5аKLNظR3 7R.?t4HE2qnc,fMӛ$R1&6UsĭOq/h; ~l?9<}Rj\x[oE{<}  _^w*|͏/dnvy[-dK0Z8oaKpť4"j)ޒV-%dЂ;9)^8B%\?6iB7˟\h<|}~;^yj{^_yr{n_r˛~| zkի˻1o߿~}|._v>>y{y_OFvឯlz[ܜ|zg>}~{Վ`&6^~1{'ɓɅ/9Ts 2_d?'Իw.MylFg77v'? 
z})A΂B;=aWMiFSI0u-|.IfqQ_O'Ioru;1}<54)1%>o-Io{v|6n09ws\۝ r]ۗΏ/`?^GAVO6L1~;zttelTM6H1A]M}>:L'{|Aoԇ C skw8K >ct/}ov<pVM2ݧD{@q,eXlSdU/f0"YoN풁S;&ndA M#E 1وȷ[ҧlap13R"UQgX Nl 3#iFW߇oc{1X L;ټ0]ks6+}fǴ;n٤N;&FTJrz u%JŋӉ+ x<S*'9חe.BKr}_ɢ~Co ӧBKϧQ@W+2dNP!޽PA#6HfEcfsG>C }ո?fWb,+5#޾OB]摚&;u . zA^]|8&65]{*%dŎC}J] UeE2BMrd EC `8Q.(O}nֺy11_]{@D9988KKm8痗 "bhnb`.T;՟'@LA&1D,W\hd@&[XEEyzw*L>iGSsSy$!&" {Eܜz=ӶP@ ;}0Iu {~2Ih[Yg=w9iv|߰M5 {¶g=$"|&:jL@vJP:~vO,,.Z-u#&R{8j)vP{j-V13$xu eՠEZ#B\[0hvN-cꋡ٣D>VN55W<FV-B!$N*/\K#i /T ^V[h"Cst|#FԹ( ⡃? 11od!,W˲ c*E;Nߍ鈖仝dA3z2vİݡh73I̝78LI3*l@!H F 4p&`ޘ" ,fk'; \8 g'\pB E PX|֐fPB]=@SvpʁiD!\na"9,AEOY{#nacanD㬭$t2n&ΨSةy DP [*L$L<<#I$>kKԾl7v&9@B_HbLh |,O36(!v@9 ?'`0*0LF8zz@|Q1aKTg/U4_% a0~b=MAìa0Ka8]&2/\$baۥH( m'燋aIYQ1fZ3AuTmvOFz $~!ko-v~j\kr|^z73eU)Jb=I+f]˛89S]p}mapL*'VC+NH4TYDmvSC'*J]PgQꩳ#0IBB6 а,@ jI%sʬ`[W1-S]S ]*:| 4A&XD]co&480co^3'(%8pp@)2R*o0ee,T>u:ʒkLp+pm4(3Jwkocny4qcmuk/Wmao  M2e@(,Cp+h+Bko/kԽ+"ΘJ$c]4fS& o غ(S 8I-_=98-Dep ϩI0'TW ;6e(5(s 2>g6gx|Λ zߴa?quαӕ銙Kl` sm LdBY. /hh5ם:Ik{h5H3StVMS?j-26\8d_Yȵu1&4W 5 |}':x xZ xuĔ l^Zr%jY5-PSǵ:b+LX?LY-fuy *uV(+rCZDamdwnϭWH{Vx96Hw-8t7FlRs\9m Vi?} J>i9Zl-yRZFs}` qiHQ2͌)Li%T ]0[!X[YD?=?\=CĤujO!("ۍt1ѹ@XFU8}FԕwbQ @zSF"WU(Wweʛlvn'^F>v.6f,yb1-k͒q'}PqiP5P!^pyfi-l"b&v8(_ssgv׋OѢpjwIo dL Afg(]pmj~LiH8ݛp+'܍[ KDIDPR%hUn,*Bn+8KTWj <@ORQpchj1ьfiby?\yYWtneUsR\U\嘠Wo_}myOr;]z/S/S/U엓2Ct&Dı=J>F4mJ:T69L|Y=dSzFfw击 U΍H…I59sf2 8_y.]:h|BDB]ݾZ&] ΓZ>=azK8t8ھ3.w\ AMrj 3kΜ@ ُeYy݊CƜ#!gky9l$弽p$҄NJNJʄZ$Md)O?9 ,XEg,ޗs\Du@{\#sm'[x{I,E 1[X)ҬE|$ ُ*_'!CNisQA"(*I.)\,[6Y\QDri'O3P!* 5KiFӾ5_8 )q4ӡRvF56I=~r,rQ̛M.-6Ŧ;>\N_BN}w-nb(J`8W'8hBpL0mLB{C-FAg߯Jds2cQmS `t箯}h<]vS\jA`eS xvnFizFCMGlxc>߄:%;)hR;oq_pgt6fmLa׎#]ܸ(F(>yLtFO&I9h4bHl+A4!(:si7܌4pP0 9.3 , L`|4m "PZO3]CZfq!& ܾ eDNI qR K _a,IՐFב uP8Reܲ 7)gِy:_y2G,)/^^ܦ:%hb#ؙhs2C50aܰ ȉ!Ac[K4:HiuI+RI;q!Rd+,^t1"*ʲb#cPԚIa [XX՜` ?|tS(!/B{ϱF+w%ir-/ Q?a@gRShzcG^;{[ni9Q(];z{RUd¾Dp0 %ia0]g1=0N(%pd :綮 3X1V^.!sLN"pr!{GOÇqwiu0EOd}Z_WIe/ É|('W'yW _f yq~(rw@|?, >((H@8y_^yMsđl@jf~m΍Ҹ8Icy Ru;O"B/3Qo4o3q/o~r֋gC׭Ǵzʽ؛,X*7^ѷz ƋNܟEW qfXܝS$Jbtu;Og'%'wS;zST&'p/Ȳ{} H^'*t🟯/7aI6ύ+`HIv:8kypp+ƬҨ?nA*d^<(o\刺QFot]JМFW8),l#Nu>qy8 >H{kQͮC^4xEu2rI)@nֈg'~Qjtr5!+$G``E-z,,a+ѽ_m}N\fA0C`. &l=MWbmʺ*{0e a/8ܵ] 93(Txƙ2}dɘY% Vט5c&؇w/Jc6Q}| /2Y[c] AwV~0HmR|J".Z܍vJ)/F= 7"LMw&cSb}(ہL_ t !381dŽTDK P@Aif㤏֨m"(=ٶCdyxf^[q!D VM˽޿f:[#:(8#~CmBCvgtQGXjhaZ%aUg@5jnFN`}-׺Zº8k-hS:K!x tQ羙#X3 {II?\Z|S' gbw0vlcmV^Un"^OaCK{Y?)uY,& U+q]M Q32quABĪ:Xߏ0eӓϜ)2lcqؤ` YbsȚ8[SN@Ox\$>G9|u<9;vIZYr'c%e\;:{Wq s{gXźJ$ŕeٗ/RͫĈ"Y([q_..\lْl===T˙\-ngbYcP5jeIu|z: r7' Wҫ,S׳^ j<[5C.n&Md1\0J)WP_Kku -oN'<[}__OOIlӫϯso Sprm0F C" k JZj `M5Js{6\KdzwUٵ_1{M%aKV%Վ$-"21RƒΤgԸ-4pCm[ kOyTh\U(1[Mer*۹OޕۋQ)M(6 \+@!yHNkZpɤ^x~r x8ie .P5gT${>93-3Qnx?u11H"D:%7u]NfBRBpb Q]G0[=Hh̚Q|D֒D6C;2) _޿dwV.ϑ..ch#_m_.pA/aGkOAw7-3NUCGڅеaWM@V'f[kLP2/q%ʀQCR@5%AH@P$ 4v: u::W!f]++X{TCYqd>ƫ^#۷?J/ːOo0 /EkύQ1r낰c/c^{sT2_ ˿HRe)4Y,M5?|">ܚG+Hjf#ʫWeۗOz3WO_ ~*B0)I,B{ ŘV'}x*]LQj5pF3,V|RcXw;-%xd iɟOQu0ϧ[ M̬_.bq& p!Q+]\[|bR2ML*Q%= YX'uKʓ^p@ȸ*q+6NY3Br&/#SmG1Zd`\jNE!!UFb5|6$+ɝg,ZQkoމj'k酏NA8QhD!$#S2mIlI7AN'RI&8J&Bp`(eC D4v"MuӶX8H:/[GI$g,ܞD*>]Q׻wt2fZ?[!/]{s۴I>D)hB"ܧށq6Ea .JA1 ժcHR'54B q⸩y>u VGH93tReMr |Znyc|d1nU]/[0aL|\׃PdQ((Y(4!-sVqm!̿Ddg T+i4*֖+Z4& &VM+kp/-+b!^J'7֗G+ӧۅNr֦d # ER(JeѡjhyG@II<㙓DDGf\p3K'f3؂R@bbhF9uZX2*iZU 8D䊡%ƶGoANIp8֑kS# r.fuh$j:L ;E!Y-I)) Az"'"@JN5GfrvQ|QnkD2(EQ3>IE9goZ+-fwU D URҾ nIRfFQnl #Gg?M?^|^3_E_Or˧I.ѵ0w{{3=BI8_}y}TL̈́X~m?ߠ?T@Dũ?3WS4o%9MQ3Jߞ|>#aRP!tri狿,kv`C\g.x9xK|±ĚCq@P7S3nL+'mAmT)\N0ԲFJIDI]=EFNEN+tT(\iA=vpA n=>BN=uJFѿܛ xvh/qR?)Mn*̭;??*-u`[{FG\*WUK: 9q>BR:Ȧ0A㆏ c[ڬ7P{+ȇ9Vv A(,| <:\ۚ |L}A0 z 8=nGIT^I*$L @!:5( "!-b-9V:bXr93ƐYȽVQv! 
ISSQ .D9F샜Bծ &$й#B'P# ;(P%v$Zut$U AqWnY]DG9l/FɓRDJi+te(m0*ј'oHܮ:D) >$KZS1 ڈF!.qKx }3/R WcͳmV9K|F4 jLLT$>^\^ZI%.ZdBs ZŐ1lpܢ!"(2o?Rhn!Ƭݕ&C܅G-3aZ'*K=mއ$A/DϠbaG[١K9}$rݠOO,` :JzH"`q'm<ԭ…)XV\Gq8#e/d;4ۖZaPBg/?h:زЫvD [@;}/ \͗3bPCHR$޵!zIB 2Ed;|h02^E"chDঌ+}YCQ{S/rҒr2S\*6tD dˈxU{NkSd0#ۉ=4צΝUN`A-\8D/EF5=>˧]IJ!`kÙy9%h>ߓ;@ Gg<'-afݾtn;^BYvk&3Jr e. '?k{N}zA +*ۛuFS9ɈjPJif:ז5GuxMVr/+)E4lw,|=$JԨ(a\yGB72hlEbȣ䡍;^-2hiP@GG:@RxybB;!=j4nS\S3 !i bYKI(=%VpѤ!# )~hQf@hty2b!K#uHrY3oXIC n:/17o)l|& dmz(גqH:Q0#Z SmuKmE߫|; v@jro*w!61ӹlDrZF^wKN74;3mDCD-W5sOmRܛx\cL:TVm ]nZ֬p}0n߈]x^0fa!#ͤ!Y UJپR+w%CxݲY/{kw>}?_v2n-;zxW'o.މCj5m;99I9k"oDzl&c&8J)ʹ{9n?zMá%]3CskQ"|d=< K斖Ck=aDAzR6*t3QQ`C>Өp9x탞]v3%8Ög ]&Jp!7rU$zQ|F(|#b{v{OP ʈQ-)W3`RC:֍!&8-Su&}nzCKN`^-P#柬#xRDRen/>+R2Cno]D6_|Q5 ZMGoyoky )C6%զ4vB'K-fhK+]$Y1GBnr*he` +MC@sO/&!ZADyZIb5}hd0 >.s8QD͝=iE+1WO[x"MxۋTvįiQyGd/@ By n TlZ9 I> Z;P;R/Yŕ0/M-_3KcZ{#_eɖ7/,u|!9R={-TjU].`j!/$YptL0*jF:H7U"Z61^fKE|;4>rroZ"/l\赗ɮݣFȦb$uî)ZFt74fKdnALOpO36E_Op Ӝ~Idong"a*핓PӯvFyBC^ǸSxf.i0OaVUpOi!9S,_4~:7íARk:S vLX.f 7dg)u>LSh${ܝq#`fZI[r" VkGbD'QGy8"*+lr$)ƍXJb!Cz"Iyގ&{O'n}lPٙF(n!+uP5|+() ;`np&5mׄ09 %5Y !e [ !\$ "U1U?=9w-~ W][H:{(Oۊƀ!`zWk٣nTjYALAM-Vi<~ny&􀮫+-j.1F)>8H10͐\'q#'Ÿ&)p/ C֭Y4gAqiLNCv'QqE+&tM^}ydntR[z0g55(/q!yeVqr! E<ߨ4> 0kk؁pN#d M@{"b5k`DϑO~r:Nr+%8)RKjpM\Dt("d.1S̯F]ž? uz+%{pbLkbA/ zֶԌHoCbkm[BHaN25DX uDx`e&jxe[]ST 3BY{_l̔tjz7z/Ov7ڰSZ6 ><9"%ICZ1%Ձx6\Á2YEo:ΈN~6`iN's]/gRV?9qJ wde#L U19N|+*Lu~Hdw98\rSi* +I~J]Ͽ 9["Mn4丘ߠ\/bBDR៿V.s%2W.s%^Izx] 48?qF @sJ΄Nhgx0ڐҎB0 o'{~\CyC~sla)Q[R9Cᅇz]ɚG vm} T!v*_LtWvJ"#~է/!_ U(j4RԠy̡&ѿA9֒G]/`mxo9ef-ّyq EBI'nZlT+wP-oQ>hb9BjyWvgg=*`0LJGWa(Ҡ+修ڌBȡFKE=n:2W=g'vm7S(TRa-NUϨl֯K[&J`ɻn_eRT.|4ng/ݺY}؉8|N3+xYmac;>܍B./񳡍D Zq&B0&dwAf T6wZheL Rwkcњám;'D(k $VC)߅(H% sk!)Y^~%ͬ~(cqn8RNЧn0EP U)} U T9{$}H\)~\˷;+|<VZ4w=' 'w52n\̕}u:%k_'."LfC|D5dRUYZջF mw' EdY֠M Zop x;TPxSA']>Tx1D+NkM1`9ba&_#2e -MK-gч= KdeDqD4vwL6W4ҳ E ! >VNleZH 9'r8}L OĀ0M { c  ::-wpnSv~*mƘvI SrhM/iRh'촪ܔl}2OɴW0IM4.^M {Ry,.`i|N@-ʞ}ViR(5i.  Jy;AݰtٙmnQD =#%lBc#BV1*B Ђj-SpخcTi9 5}#_+|!P\iFMhj'd(.iZ[G*yGڴOK3OӒj3N8eƯÎ@1=V%k[};uC1uS-f:v:=u)p۞NB3mmjTwAܐ\Ov<~f`jO'`\&K+Z^'-d;zͅFD|??_o<_^x(~{ JџU>.fHX^{~Huo|PٷLs~}9ԛ孋"%5#ӥ dЎJ *AnU:PIdt"ٿˣFI_ڮ/H3]YgPr/.yk1Bpox Ja(e "7@ƞ$OjFh+g®A3v@r77^{nf}bIT<``2x$p*'_1-mA=Mn=]}]>U9|0ɧ w+_\/'3_koO=gY\T?ޮ ~\ L@g7V&1Hy1B+PRVTvID:>"$G2?ܫeYx_jce~ߪM?}ן/_-7`woa"zTz۷q拇K&߾=ǟj댐7_sc??DjR i 3O?R! V| CKq\R*BY#;m~ja%9pY׽)QLX߿Owp99Z9XRٸ?R l1WjwT}Y-(%5 vnjnM^_۸anè"LzJ-ECXL_T0R2.F/-Z^OZ@ 鰐?bەڷ6I#BeDb 5̄u:`ĉМV( y3G#}+0?c݁˗=dt 8IEe-Fh eQpbՖ'bDor!S1Œ F@jհrIzwHV8/N?Bi2swd"TPQj4^ْ17XEW0æ' *"JnJfWggM'SB{vϡ,9U2 ^Fy0Z]&!I8WZks 6#)N"FIEG(KB0aq>40nADo% *Gӂu΀#g2(  >DUm)xʑsnCĒ*0=u^zC5 q@9ÄiLF^%V8LX5`]!-;:hu~52(2r՜K `Ai*(`\RXs%V;)zuֽ0$5eNQ1$5h$5@6|ӄf\Ou&˘|87 鉀Ґ D2OPd"SIA ; ]oIrW}I60`,}څѯ+ Ii O5ICrHpJZ z zA"zӒj|SoXt/޸:bRl8*@>KҤ0xR5i,J]d-Mo!2%(^Ҙf8 88fcQ9 n! 1,y\:b'|SvF)7xA13;+ FWP~0rR9Z28%VF\5`HB/6 >AZ^!8)(gH6]R(3O ;F Re8T4B:<`KvUL6Rn8]BV`80QMc)W1ǩ(K[NeX|c-@ЪQ~Aw aЈFYQ&ڈ052Ѓ Kg&40uٽq0ۻMك^l?RKZ o #Iоܲט?"V Wfq<*ݤD}_3fmU,0NN?l<`8.t1͏sWbƨQl3g}\`%[q{unQXI+}DcDŽ7=$<ǐSؤI4qs@ևyQ]b;x4p3>yfD8`Rm w_ݹQN.//G1! 
|},{~.&_aQu8TlVٵsHO\t}2yXBC,T\`^8=@qdmT淉6&Gu[G "jnoŏjhR[d(B=Bp2-M`s=)!;o͆* Bal5c_RCB^MGaa,3J'k٥߮QJzsWpM姝a,y\k\RI!=$N q欫#Ԕ \z{ @{F(`e+vl|.x~O:Tk Ē{&|‹STpܯu&>FN٧ 8Yƿ{347sfL _`a W\-9]R>dLO_dp3cS- >/rV“%Kfܛ!ȖKJs {v>S9֠cZh\~,{E0BN26p#T$x5:qXZڧ1'"i L(#24LXOeHmy&~sڔV6ڀ)loR-@ă@]hTĀ]5N5hU&w9I`C}>J\ NDm;k}O]xsUWs82;FhO-lnƅ x̠갪lűlkdp&ѻcyf1:hm*JTDTj3ouٯ]a`@%jyGIܖ%%2Ai;]؆9ìoM߀sN;js@ȶ - ekB~^Its -:}L'QD@~ ڄ"ѻ#̠i{ːXnyhUģa8|Xj̜փIUʕ"M>DIPXH;ň1|)koLrY&qiL vsI>y 5`ʃJ`;W`=M<ԉm?7?}u dYCbxSUIgTlP׭ zrm؝xדgY5̋盞:+Ժ~bTm7]n)}V&{M#(4,3Am ,6)7۵m)zB纞mܵhS9BI[æϑJ|[0a| W-Y؞ts'+˸:CJZ6= 1YP$d|zA\Xqiӣρ݈ Nb9Pj|9p'Mg8 j' g\)rDnwJ(QaiUmXaqJzOfwZ gLdoOc+Z ^%{Q*tJ,V|~Rq\Tl*cfCPI-k" ToW`ݗ*#0Kek bX9@Wt)G,UN'R% ڹGu `L77~hgЕCIU?F!" cO3K)Fl&=H@C ^co@˘\`NPc=ZjlcA ɉ`(Di#hNhz/6Zӽ]ԂӁ# i17VPe6 ;a1:PCՄP82qֳFXLʴ56w(PQ+G9Nu40UB8*PPTRZk` E`i˸ h8qԴ:D!ZrFd bK+Wqה6i&pCZ@z8nM¸pC%nڊSP݈iS F5LF7k=Va[v= 2+AOy3!m?Tn#yxnyxۦ/k'@A=֋fʭsk#p+f y0g]vxa>zɁh(F(ZX@üBNhx1ƱNcxLoᣟ/k@1 B']{ -ٞ [;7:er/ڨ\k:pXN.v,Gelz|X[`c ií4p"‘\H")^c#Y2-r4pRHBkz2JzeDd  I״O uoe`ɥsYH%U|>ڬA{G~ת8viO2L4C#ZҚ܆]93ӯ[K 4_9c_^sm|xj^q O$Aq&yj]sj: ǖnt3P/,2ˇYLM d"/.~3l67q=yxiLr󅜷kKϗ\>cNy|w~&·d#@*(>i}?/'#3r9j6O/\z MX9OiHB޸֒){ncQ Et>v;^/Tδ[MnMHZ2uMmR[] RD3h"Oyw[Dքq=X@ľ :JƧҏR꼲-xeIG!%:,B UnPF^*Q5m]QFl^oKPn C&l/Am<5Ȋ!JK;SHN_8I_$]6'$0QvN=ŠG%QƗ>1䗃@ m~5Yvm+]F#fZY/E5DSg"l:A$ÅY22Ot&Si)kmZ3;Xq-¡$ږ"c[i& h /S|?Cnq'q6sI3<,ETv?*.nm˻7bH#Rul/S[}+w\"N;j HȞ"gô(v8-~{ j͖g~16H4;fbE&Iȗt-z/ v #hR@ymA%k&yͲoo, Nm-c\sߺr15WVIIK7;Pr<\73k`+Ԗ3w.Dbj7/ l|̮!n]X!$7Vq=3M"II_Mf)"?-=>\{-.*fg>'3ţcSRųk޵q$Eз=2~>-D);9 U)jP=0V(69Uu&/oX'|1WUmO`/ȇ-gdrTeϛ{VH\>9OslwNκom7J>t+&nokWX%"L="(X1AI` b0,8[#_74٢^k.Yq.\]_'2ދ>'x `\\q@^[b',bNv9(z*q* O_}q枱WWOah[V@ q#iy@!Q"qQz4QBP? 6*y.9ŀ.m1 `HJRXqЭ4I2SM RyrFhKOSy奜ڧHy2RE \h$H'1%v -%yX`d ~JR 05ZGypȂW: V݋fJlg_0o+um29l)VI ~+sFt@X! !:| k/C}~'<2?1tW;1W <\?~| '=x ▾J(D dso/zX,׌xXo.[<@\kg-wXO !QHS7os" f|y,ݧgP JSh&3GyY\*tÎX`  3~|6Rʙ'CfVs?Z VFQ#OY4Y8IYp%\J#a5͙&UqP^1> ZxZ?C]}l[]qk'peu&~ 5" k7+J5;}GeF:6P,EJ&VHI:kHʇrǛ-:#d[`Y6El5MoXٍq@̮^n=.w7:ѧT-I?G^~Dy#uC1`7978`G?O/>dVE݇ߚV} OF_m;pTb-Ȕ\Mx֣3r c/JI$X`aA,Ԝ:L4K0[З p=ݠ>0?)!ϱ'5(|-=GJ\nJu|tܹxhEj%/  z|98cfF3f,2CK2lWd7P_닥h R?cM-y 4uB9ǓM&xA{Ӕ`Pu(PqHКy91k%[,X}S}a"C1'_dGX$$>C^WdYJ_5r]1 790_~P,Woo`Mj{¦Đ2f3zQˁ뼀73+Ng]>ɔLDqJ.Kނ-^νS۔]U>*?{n.lڑ=SwM]*yJGixX?+zQ Д^B|rk iR|l7U'g[s7w~ w: 3#!<*ɬ^)5jFs+5,˛=2b<׋Y* Xox{7 dӅ^?E۸Mxw;֔zddLI?Uߎ,+ Yo_i.̂D3@\_ɃͱZ ~QM5̅{m`&5 6U^\n3_k oJ쀬g}k8 w)l,$#?S +EsPgXD1GRӝi@*Ì*s߼ć#5%DA%DsF Y2-U㢡`Zp`hb88QD눬BJDP%wȔg"!(תscViP J+aQ F4@DzfZxMZB * 8&L h ZsU4 "HS J6"h"s+(fjCqT9Kk\$mˊТXtֱ+6?QǠ  RPC$0>Hz<1T]|@(R8ʘ'hڅFƃZ9PU(dMM\HF8яN̩\bLxn>0`(j'0MwLEpc<: <Ԅo-{"]!BF#{O8NJv5eX`q<am4q "GF1d>2L(\ )qE+`%ϊ{eUnv\#PD&D֩:H&V+)9eag#4Ը[^(1XZ}8mP*RaQ(e& o ;M4xAv*F!F51ƗRj OKr(3,t-$.CgO$ #Ⱦ.u;:r˽ݧu;v!<0JzRx%Aq|Ij,kmkvp=tWӑOR۪:u@כ2!|J-=+.! PFXnH03͂j+䧔b08hnDSo,(c$S/2Prf WZYe 1HjRLpic*ܩaR )]43WdIyV*KeTYXQ|A2,؈DVQkOÌsqOSږq<X *^fa,xՊeݏq$O *hôo!D)KSVdY` 1@dnة~QAO #P1b13©R!8E"GV<E,$ !` "-$1*{rDj  h̑mxҽ/Wq9[oa}ՙsMV}ᗜjbot{7ռ,#8^gw2t6unv 6qrr. 
wLrNB?Q x8@qWQ}t^)"QA<&JQD !р$("#2J258XJx5uX^Vs\,+2zjJY&GVWkQQd]Vq8*Oߏ.O/>$ě+ĊZQÓwX^Yjk=6>V;1i vwUu_mxxxspҮ 7[=mX]8{Pi3]A^ڧCA;V }waQ}~Z,O<5 Inj6})}UvUCX>>k#紿;[y6z%=j˻aYcj' EDDI]NM׺Pb#:FiCEڭxڭ Eh Ueluwy7J\`iN;Dt/,OpU IA=px$hcZb-Nb((42V2mZw|&F4?IphnPDOt!+L@;D&Cjr-y:PʵO^iTSrɷuh pTp qj/v.q 1Ď)@(G^ցlA FF8͹0ky1j<o9LtY(-H7ElVb{dLKVNib>wUU'ă!}l-,jOdSJgA(}?:˞^|I!A MCM֣m #q%5 w,l%zKd { d]2rmQVX)^0L\;$I/ȩ /לp){*V"]҅q<Ւ)t b,`QVĉ9ɵZ;4X"@ӐRy0QM1ƍB kӍxCRwȽQ؃|"Z"SiXNڍ׬ [)9S.mD6%K4v+2[hLi٪Ywy=upnj֬x`M]n{DL9U' 1{׻nqa>rZ)bBmSkDTN~sr z9HNU&iѧq6)Zm鏵߁`Z3pwcfQy_nPԽMG@cj:$xSugo:u Zɻɷpd̎1S4dnt^|ˎ醑 W.]WdU'0$rqr^"xtOi2ljlwyA98|6vʆ8$՟Tve^`y]ldwݦ%*(  q*eGN9âEB$hDBps ϋ:|i3h,>MWWU_jF@2ѿD.I~?=>gbEl!f리HC EWqxME^Wd. P4PaD<-,MP!j JK5接BPZZE#Z0G7y$YcHx %TVz?񿑜ӿy3 nn}>jg@O;7f4vUB?> L223io>pϔ$7rߗAi12vmֻN+%sŶ.D4*VAkn2[9,2ZSZ1B<Đv`6kX&e:9?x~p{] i*[OgrB˞+׭ط& +֕1Od#JXZŊ_lEBGVҽ=ي YW~J"SG s mAgS'PȘ'zϸ.Z> [t+ѻn.8}꣹;60gl"8Xo\ lTAI:5Y-{i]֞6 dcBinpt\/̊w?vO)vv?~L'O=3@QdWX'^'!1tV 53f*;=p2cݶ:)ȣfϢҪ G3A4VUYAɸKy>ϱN'_V;[Zt)/=Vt-DC/@TOɾn*Nts~;qt^Oޫrf~Ge'՘tݼi7_.Coͨ.-څT7iB/W!tٵ^Lx ='y_?< rpa\SjGf\gNl P{u[ 6Þ>]X^coR~*c‹T]X^$GwI k"L~27@cGiˋC3:C.\ ^_(VG'"FY SzN`U)ϵ*J@p2Gٰ3~ }a|~0so{/svޚ{nW7^g~}+ks3 $OIk-=F/I#89##925ڡ /o>'NU=-@2+w,Ad0! ͵ O?V6!2^I~~]fOe]fOe7_VD*}K z%hFX\NQk+ ڨy>'_?/rIy m_t1jYIU/C}[|k1fZ9gD Vo &jUSRK.ȕ؉%#VQMi!j. "!,J!W/ 9-!$hs5$SzL:I!Ȝl1-ch#z &zkrAo/,W!DAyAp1f\$D]r}M.f3uxfLz!Z~ W~Y]B:I*oi] íK@4*ΗTpB:4K8%&+d{aRsE9QW܌Tܚdq;$%Ƙsp[2dGސ'[2iUl:Z, F1 Fr}ZZ,&6a:OrI%8t!|MPy[ct3c~-BP-c\N|yC- rti]L `V9.DcX`w#2b='l2E=EޯETU=37f~<)dKBݞ`\6L+Qh-48G'B<!sb~ʽӡrZPZWF {m0t!1yms%dO&Upr~:͞stYͅ\_\'MvFgGW l9;h}=L)B /3QϓۂZV)L\4w*q#2t6ڐ )ӫ2K[LA*B:J( 1y{ d# 5U:ORo`襞ΔF悦Q^5b;RrXcᇆ񣼈V8&H)ilَ?q[ r&w:r2yu ɓbY UHy)x0i-~J\{qBjeaHw + ^]w7ƅٻ(EBpmѹ(H"m!jS} APg%%,7:2ii\ lI0>?ގ0g( ubzyvAdRm*(طg&u&vɕ;NKzѷQã޵5m$ClV~QI'rYvN.`0Hh]I&ˊ&AuOOLOwyu :jkuvkt0^^C<_~V,ϳ/c%^[X[8>#H;kB AqBcČ. (XM!XFQ¡8 WjԦY0'qV($mjMTGA"F:q`5қ-v[߫vjKK2\b(*}э݂cn#j8 }5Y%rA#kd̅1aJ3QE6g6eL,=j29bI2:կ׿WKfjX.ѡ hQcLZf (G#G@Kc2F< A c"BdnG!ќҹM'A2LCg@Qnx\W'珦nY^f ͯY۳`eӊRo)'mΜh?C, ltN6XɁX_C`TcEe,Vc\ȃL]x` Zw$@{tσ/Ǹ+exs ȃ@xl7&H:=aH3+=Jeu$ӛtXT-+LHxk.-\`KY2h׻$϶ji@< L]\FKER`&u{Q.($+w12 Qk /I"T JdbPoۭ_)IMwzIػO|KJHGǝn0t1pk_uG\H0+M`\(̱TsN:&l}j3[ܸhdʑOeQ%QM Iϓ[5L rn}c|cg, L Fxs 8TSR20 j180N=`)u-ZK%cFED0a#FH#贊y$lKb/.nQi)**懃jNW| bsYXlEYs82q F E.dU"f3WuVܲb9eD,F!CM91|mlE~(B촫kp!bm>{bd4 cb BE.v Yb$0X FiBB4LƳ0`ƢJL^Z{=TM,ѪSƥA^*خ ;R "GZJ6+-pyF@(+i)bꖵ \mVZ V"$1ޕ{Jzht8RͥE@ UR롓Qz* UU-m3z*%pB8%pꪴ.sa[s+A{vd$.%*nR`n\շ*K<0Pa E1ӞX#zF AbbgXJǵ_Cq̔01MY^ dOe#')_Q2K׈rgZ%ֻW,`d79-D%y9(U,:F$1aH)%vNw?n(9 bmńLPi5,U, 0f&c0G,Q2_5̇Kq`r \H&vK#H~CY؁ Q6Gji)˅&ʼn'w/\E>/aNLd߳C5^5.G{n)cQܺK}FIW'>db7k*aw'XEoGϞAE-K>vG+ADO[ߒO/e.8 }>}ݿ\xo_ɛV]??I}?M^tB&?jX6nrƚ S[S0\4mr*?/#xS'+8nʦ* BxLP6#'i?h^tZӄ1y7o?h{Շ>|}!{e4l̄-wGoHǾ-Loѧ`0I=p}jS]lֽzro' H, %6Lti?]Ħ=pɭ>2ܶ =YyΞ}!L_K]@xWݘiN9u7M~џ|i{T·*MsC{:[f|,Nݟ'g1T؃`y'|AxlW2E«v~\}?Zޘ\zۊ}'-eTJ|χ߼NA~m/o] t㩂2/~LjZo#>^_ƯУw#w.Jqz`{AÇG@GFߺDŽ)#uiIvuv IO2z? i&%Չ_y =/ne(^7CM68`d?|O&X&Ѡ.ec&iN'p^TސדI$d^&mXyw8njL]d B1.4F#0d&Xj8e\ijS|ϤSOs|iz-q,קto d p=]!Ti~sSΝ8 D|8 G!xhLFH% s42H#-mCGPc*(ز$kB%inﴖre{y45ߥ=IyFK@ Sj+GGYWq;R}-=Vk$z7zTnӧJ6G3Dn" K#"اWK+2S->V1bTC][k\B.8ߺRV7aB0/ic!WR< Ov ^8kjy\ ^`U8mx>"MVU.,l+|ҷr1UB KkPNOOyYIUxU5A^ʭ[֤]pK_Wa!.⻷d<1*rT|m8 ߷βQ1)hBJLCnr^ݹ}͑m)(]Bv+:cKPA[g){:A`8Se:T#"J("`k)$L<tz^p4ly=c2rM&o[po)wYL"`]5QQgTPMyEe;Gt_Կpn<}J\eRIJhf~D< R;g\6ex°O*&NIYv—_ Ii,ȳU9 xTܡRaַ\6Sr^T%Hw.kMrՓvq9R[-!aŴyii].yAMblM8BU+"GF:f(Ԉű V641V`#H4zP`DG4HH7afs9n$Ѓo$zeÖ́eJmB! 
(禼Wf(^W;949t!zpnLy(Ll]/4]s?eSC <{ 12525ms (13:43:31.857) Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[1018025099]: [12.52582542s] [12.52582542s] END Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.857663 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.859468 4793 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.860602 4793 trace.go:236] Trace[885684309]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 13:43:18.545) (total time: 13315ms): Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[885684309]: ---"Objects listed" error: 13315ms (13:43:31.860) Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[885684309]: [13.315383353s] [13.315383353s] END Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.860631 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:31 crc kubenswrapper[4793]: E0130 13:43:31.861224 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.884375 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.913452 4793 csr.go:261] certificate signing request csr-l8s42 is approved, waiting to be issued Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.924637 4793 csr.go:257] certificate signing request csr-l8s42 is issued Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.312962 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:03:23.19085244 +0000 UTC Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.475737 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.532249 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.532843 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.534304 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" exitCode=255 Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.534346 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506"} Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.534381 4793 scope.go:117] "RemoveContainer" containerID="ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.572632 4793 scope.go:117] "RemoveContainer" 
containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:32 crc kubenswrapper[4793]: E0130 13:43:32.572866 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.826857 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.925381 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 13:38:31 +0000 UTC, rotation deadline is 2026-11-15 20:09:28.321009213 +0000 UTC Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.925425 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6942h25m55.395587828s for next certificate rotation Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.279561 4793 apiserver.go:52] "Watching apiserver" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.282082 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.282529 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-mbqcp","openshift-multus/multus-2ssnl","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-node-g62p5","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-rdsch","openshift-multus/multus-additional-cni-plugins-nsxfs"] Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.283511 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.283799 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.283925 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284178 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.284245 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284281 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284419 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.284456 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284539 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.284592 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284652 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284957 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.285195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.285460 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.286484 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291471 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291564 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291687 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291710 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.293581 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.293904 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.294005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.294571 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.297087 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.297539 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.297552 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299854 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299896 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299910 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299849 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300010 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300017 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300037 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300134 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300161 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300219 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300256 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300408 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300713 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300831 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300989 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.301722 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.308997 4793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.313250 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:59:54.45176892 +0000 UTC
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.320240 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.338192 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.349923 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.359359 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368347 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368388 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368493 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368506 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368534 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368564 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368582 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368596 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.368609 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368625 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368638 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368709 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368750 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368764 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368777 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368825 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368839 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368867 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.368915 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368929 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368957 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368973 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369000 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369019 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369033 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369085 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369112 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369170 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369186 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369202 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369216 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369243 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369256 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369285 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369299 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369313 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369335 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369364 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369426 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369441 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369462 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369478 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369524 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369538 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369611 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369656 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369672 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369688 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369750 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369764 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369780 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369827 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369842 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369859 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369873 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369917 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369936 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.370003 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370024 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370062 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370082 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370129 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370143 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370173 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.370186 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370252 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370267 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370283 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370315 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370346 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370363 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370379 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370426 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370443 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370508 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370590 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370605 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370621 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370637 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370669 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370685 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370701 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370718 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370751 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370782 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370777 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370799 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370830 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370861 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370879 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370895 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370911 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370927 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370942 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370959 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371009 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371027 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371063 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371079 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371089 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371096 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371301 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371329 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371441 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371463 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371487 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371534 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371554 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371662 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371682 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371745 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371763 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371825 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371855 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371879 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpthl\" (UniqueName: \"kubernetes.io/projected/4a60502c-d692-40e5-bbb7-d07aaaf80f10-kube-api-access-xpthl\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371904 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod 
\"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371915 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372445 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372809 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373036 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373074 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373799 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373940 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374180 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374335 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374576 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374942 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375040 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375264 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375547 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375707 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375861 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376095 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376270 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376391 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.377139 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.377552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.377936 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378702 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.379107 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.379337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.379535 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.380360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.380576 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.380825 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381009 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381228 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381609 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.382815 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.383023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.383756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.384376 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.384681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.385746 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386112 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387280 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387297 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387408 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387409 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387740 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387876 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387976 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388872 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389109 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389157 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.390715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391311 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.394233 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.394505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.394910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.395512 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.395724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396392 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-multus-certs\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396582 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjsp7\" (UniqueName: \"kubernetes.io/projected/f9dad744-dcef-4c9e-88b1-3d8d935794a4-kube-api-access-mjsp7\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-system-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396654 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396680 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396839 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-socket-dir-parent\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: 
I0130 13:43:33.397695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-bin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397719 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-conf-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397739 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgc5\" (UniqueName: \"kubernetes.io/projected/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-kube-api-access-kxgc5\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397777 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-os-release\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396925 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397082 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397082 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397162 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397232 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397255 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397302 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397791 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397986 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398092 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398236 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cni-binary-copy\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-netns\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398319 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-kubelet\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a60502c-d692-40e5-bbb7-d07aaaf80f10-hosts-file\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398501 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398522 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398534 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cnibin\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cnibin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-multus\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.399304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.399901 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.400558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369894 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.400961 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401317 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401716 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401922 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.402287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.402843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.402991 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.403132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.405224 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.405359 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.406461 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407362 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407415 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407936 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.412660 4793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.432384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438529 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438612 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438805 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438863 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439202 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439270 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439606 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.440204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.440621 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.442401 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444781 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444873 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445081 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-rootfs\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445288 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-system-cni-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-k8s-cni-cncf-io\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451539 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451572 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-hostroot\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-os-release\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451662 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6pg\" (UniqueName: \"kubernetes.io/projected/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-kube-api-access-2f6pg\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451797 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-daemon-config\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451855 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451885 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-etc-kubernetes\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-proxy-tls\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452118 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452138 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452152 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452169 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452189 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452203 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452217 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452235 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452249 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" 
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452262 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452276 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452293 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452307 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452321 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452335 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452352 4793 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452368 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452382 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452400 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452414 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452428 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452451 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452468 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452482 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452495 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452509 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452526 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452540 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452554 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452572 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452590 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452603 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452618 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452634 4793 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452648 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452663 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452685 4793 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452702 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452715 4793 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452730 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452745 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452761 4793 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452774 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452792 4793 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452806 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452825 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452840 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452854 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452873 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452887 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452900 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452913 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452928 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452942 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452955 4793 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452969 4793 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452986 4793 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453001 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453015 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453032 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464811 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446262 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446477 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464913 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464927 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.447037 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.447138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.448680 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450873 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451373 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453406 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.453469 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.953433524 +0000 UTC m=+24.654782015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.449393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.465385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.465817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466002 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466022 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466224 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.466523 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.466550 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.466564 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.467156 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.467261 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.467782 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.468495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.469891 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.470892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454016 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454706 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454831 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455157 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455181 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455717 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.456033 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.456197 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.456629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.457338 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464265 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464616 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464727 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.471136 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.971112634 +0000 UTC m=+24.672461135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.471314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.971302389 +0000 UTC m=+24.672650890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.471507 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473140 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.473256 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.973236886 +0000 UTC m=+24.674585377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473937 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473965 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473975 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473984 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473993 4793 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474004 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474014 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474024 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474036 4793 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474060 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474070 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474079 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474089 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474098 4793 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474107 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474118 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474132 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474143 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474176 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474188 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474201 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474212 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474224 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474235 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474243 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474251 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474260 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474271 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474280 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474289 4793 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474298 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474309 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474317 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474326 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474334 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474344 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474352 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474361 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474371 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474379 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474388 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474396 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474407 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474416 4793 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474424 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474432 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474443 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474451 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474459 4793 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474469 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474477 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474487 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474495 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474506 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474515 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474524 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.478581 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.479296 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.479560 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.481648 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.485557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.486676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492523 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492602 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.493256 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500296 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500331 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500345 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500400 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-30 13:43:34.000383127 +0000 UTC m=+24.701731618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.515355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.520023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.528562 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.548405 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:16Z\\\",\\\"message\\\":\\\"W0130 13:43:16.323216 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 13:43:16.323625 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769780596 cert, and key in /tmp/serving-cert-3571744094/serving-signer.crt, /tmp/serving-cert-3571744094/serving-signer.key\\\\nI0130 13:43:16.518841 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:43:16.523129 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:43:16.523353 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:16.524369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3571744094/tls.crt::/tmp/serving-cert-3571744094/tls.key\\\\\\\"\\\\nF0130 13:43:16.810880 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 
1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.551530 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576401 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-hostroot\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-os-release\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6pg\" (UniqueName: \"kubernetes.io/projected/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-kube-api-access-2f6pg\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-daemon-config\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577113 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577304 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-etc-kubernetes\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.577443 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577461 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-proxy-tls\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpthl\" (UniqueName: \"kubernetes.io/projected/4a60502c-d692-40e5-bbb7-d07aaaf80f10-kube-api-access-xpthl\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577762 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-hostroot\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod 
\"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-multus-certs\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578335 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjsp7\" (UniqueName: \"kubernetes.io/projected/f9dad744-dcef-4c9e-88b1-3d8d935794a4-kube-api-access-mjsp7\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-etc-kubernetes\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-system-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578702 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578861 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-socket-dir-parent\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579093 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-conf-dir\") pod \"multus-2ssnl\" (UID: 
\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577691 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.579442 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-os-release\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580379 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-bin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgc5\" (UniqueName: \"kubernetes.io/projected/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-kube-api-access-kxgc5\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-multus-certs\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-daemon-config\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580514 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578133 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-system-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580686 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-bin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580704 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-socket-dir-parent\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " 
pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-conf-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581090 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-os-release\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581188 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cni-binary-copy\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581266 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-netns\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-kubelet\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-os-release\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580478 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581573 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-netns\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581906 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a60502c-d692-40e5-bbb7-d07aaaf80f10-hosts-file\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cnibin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582072 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a60502c-d692-40e5-bbb7-d07aaaf80f10-hosts-file\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " 
pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582142 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-kubelet\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582148 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582174 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cnibin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-multus\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582422 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582604 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cnibin\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-rootfs\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc 
kubenswrapper[4793]: I0130 13:43:33.582837 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583289 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-k8s-cni-cncf-io\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583375 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583466 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-system-cni-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583706 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583795 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583867 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583939 4793 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584008 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584102 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584174 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584243 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584325 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584408 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584491 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584563 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584645 4793 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584702 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-k8s-cni-cncf-io\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584718 4793 reconciler_common.go:293] "Volume 
detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584769 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584781 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584791 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584801 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584812 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584821 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584829 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584838 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584848 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584857 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584866 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584876 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc 
kubenswrapper[4793]: I0130 13:43:33.584885 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584893 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584902 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584910 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584919 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584927 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584935 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584944 4793 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-multus\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584953 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585003 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585017 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585030 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585043 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585077 4793 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585090 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585103 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585115 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585128 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585140 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585152 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585165 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585177 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585189 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585200 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585212 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585223 4793 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585236 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585248 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585261 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585273 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585285 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585297 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585308 4793 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585320 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585332 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.585345 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585358 4793 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585371 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585384 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585398 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585412 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585424 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585435 4793 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585448 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585459 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585468 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585478 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585489 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" 
DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585501 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585515 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585527 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585003 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585610 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cnibin\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585689 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585723 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-system-cni-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: 
I0130 13:43:33.584253 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-rootfs\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cni-binary-copy\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585865 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.587871 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.590579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.592541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-proxy-tls\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.604294 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.622147 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.631659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpthl\" (UniqueName: \"kubernetes.io/projected/4a60502c-d692-40e5-bbb7-d07aaaf80f10-kube-api-access-xpthl\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.631918 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.632629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.633892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6pg\" (UniqueName: \"kubernetes.io/projected/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-kube-api-access-2f6pg\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.636230 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.640613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.641238 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.641969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjsp7\" (UniqueName: \"kubernetes.io/projected/f9dad744-dcef-4c9e-88b1-3d8d935794a4-kube-api-access-mjsp7\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.644029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgc5\" (UniqueName: \"kubernetes.io/projected/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-kube-api-access-kxgc5\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.648227 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.656275 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.661342 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.676619 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.683650 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52 WatchSource:0}: Error finding container 731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52: Status 404 returned error can't find the container with id 731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52 Jan 30 
13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.688566 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:16Z\\\",\\\"message\\\":\\\"W0130 13:43:16.323216 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 13:43:16.323625 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769780596 cert, and key in /tmp/serving-cert-3571744094/serving-signer.crt, /tmp/serving-cert-3571744094/serving-signer.key\\\\nI0130 13:43:16.518841 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:43:16.523129 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:43:16.523353 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:16.524369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3571744094/tls.crt::/tmp/serving-cert-3571744094/tls.key\\\\\\\"\\\\nF0130 13:43:16.810880 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" 
len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.688896 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a60502c_d692_40e5_bbb7_d07aaaf80f10.slice/crio-d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b WatchSource:0}: Error finding container d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b: Status 404 returned error can't find the container with id d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.694459 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8d16db_eb58_4895_8c24_47d6f12b1ea4.slice/crio-ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc WatchSource:0}: Error finding container ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc: Status 404 returned error can't find the container with id ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.698232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.707769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.720655 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.741862 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.757802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.785384 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.806041 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.822714 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.838449 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729
d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.859483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.869273 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.876212 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.885469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.902100 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.914265 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.914277 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.923869 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.944014 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9dad744_dcef_4c9e_88b1_3d8d935794a4.slice/crio-a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0 WatchSource:0}: Error finding container a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0: Status 404 returned error can't find the container with id a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0 Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.944237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.956002 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.966023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990162 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990141974 +0000 UTC m=+25.691490465 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990299 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990305 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990319 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990358 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990342018 +0000 UTC m=+25.691690509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990435 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990482 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990469421 +0000 UTC m=+25.691817912 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990537 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990548 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990558 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990580 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990574384 +0000 UTC m=+25.691922875 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.090732 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090870 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090888 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090903 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090951 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:35.0909364 +0000 UTC m=+25.792284911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.313692 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:45:02.916346742 +0000 UTC Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.405336 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.406717 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.407886 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.409241 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.409937 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.410961 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.411687 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.412301 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.413682 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.414326 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.416650 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.417663 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.420660 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.421590 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.422208 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.422752 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.423446 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.423892 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.424542 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.425245 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.425913 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.426514 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.427022 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.427781 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.428339 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.429006 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.429718 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.432795 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.433730 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.434748 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.435348 4793 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.435467 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.437364 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.438345 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.438845 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.440500 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.441686 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.442296 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.443335 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.444109 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.447114 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.447808 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.448917 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.450010 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.450934 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.451523 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.452477 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.453479 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.454507 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.454996 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.455921 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.456567 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.457188 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.458359 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.572359 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.572593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.574144 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.575693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mbqcp" event={"ID":"4a60502c-d692-40e5-bbb7-d07aaaf80f10","Type":"ContainerStarted","Data":"e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.575796 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mbqcp" event={"ID":"4a60502c-d692-40e5-bbb7-d07aaaf80f10","Type":"ContainerStarted","Data":"d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.577316 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" exitCode=0 Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.577383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.577400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"483688d83c9fd52a9c7106da5a4bf9f5c29a0ecb4d0a52164165da4e2be17cc3"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.578914 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.578945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b78b13fed81582e751949091b34bc98c1de835dea70c0882797ffd3ec8f682ae"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.580129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3"} Jan 30 13:43:34 crc 
kubenswrapper[4793]: I0130 13:43:34.580156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.580168 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"520a78a684ca7b518512886e458b462273f9a3705d5f3e6d09790db4204d11ca"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.581743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.581771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.581783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"416a0a57299aa5cb5d7980a5b1d9c2f1f627d9e500c87db6a82e042106ade790"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.583466 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.583704 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.583846 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.583923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.585738 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.596374 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.606550 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.618662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.629103 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.645951 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.655617 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.664115 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.685161 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.700197 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.709720 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.720540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.732257 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.757974 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.774293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.788419 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.799971 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.816329 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\
"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.828012 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.836772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.845999 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.856465 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.871705 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.881194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-c
ni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.999831 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:34.999944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:34.999974 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:36.999952974 +0000 UTC m=+27.701301455 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.000002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.000033 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000009 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000125 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.000117208 +0000 UTC m=+27.701465709 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000136 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000088 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000289 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000325 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000172 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.000162229 +0000 UTC m=+27.701510730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000424 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.000390015 +0000 UTC m=+27.701738566 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.101571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101714 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101730 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101740 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101780 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.101767985 +0000 UTC m=+27.803116466 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.313874 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:46:48.67389587 +0000 UTC Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.397444 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.397512 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.397523 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.397558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.397592 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.397655 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.586352 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f" exitCode=0 Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.586437 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" 
event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.601838 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run
-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.617509 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.632734 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.648337 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.661576 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.674450 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.687413 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc 
kubenswrapper[4793]: I0130 13:43:35.704487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.725568 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.744958 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.761028 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.783644 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.819173 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.823219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.825809 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.835730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.851635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.874616 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.892128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.907314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.918759 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.929659 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.943728 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.957303 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.968759 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.983867 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.994699 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.011461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.024620 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.041195 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.053487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.078185 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.096614 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.109373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.120025 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.132732 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.146714 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.157643 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.170848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.185552 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc 
kubenswrapper[4793]: I0130 13:43:36.314185 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:03:26.683976757 +0000 UTC Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.595237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f"} Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.597488 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab"} Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.612040 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.627524 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.657875 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.672030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.693112 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.705713 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":
\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.720780 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.753861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.779264 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.799514 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.814669 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.826254 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.840425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.856237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.867776 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.880866 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.891933 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.911720 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.923911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.938783 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.950113 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.966465 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.979676 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.990613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.006492 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.019615 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.019662 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.019649877 +0000 UTC m=+31.720998368 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.019975 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.019966194 +0000 UTC m=+31.721314685 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020072 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020087 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020097 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020118 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.020112078 +0000 UTC m=+31.721460569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020155 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020174 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.020168819 +0000 UTC m=+31.721517310 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.021587 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.120262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120440 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120474 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120486 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120550 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.120535515 +0000 UTC m=+31.821884006 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.315135 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:50:50.253154544 +0000 UTC Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.397971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.398033 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.398127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.397990 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.398228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.398328 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.600949 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f" exitCode=0 Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.601012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f"} Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.625283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.644212 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.673831 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.686583 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.696860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.706770 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.716716 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.729008 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.742468 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.754596 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.768668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.783001 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.796226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.922752 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pxcll"] Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.923144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.925279 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.925965 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.926395 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.927012 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.935524 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.949568 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.965909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.978468 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.988995 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.997865 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.008138 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.018584 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.028905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34045014-77ce-47a5-9a21-a69d9f8cab72-host\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.028935 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g5hv\" (UniqueName: \"kubernetes.io/projected/34045014-77ce-47a5-9a21-a69d9f8cab72-kube-api-access-2g5hv\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.028966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34045014-77ce-47a5-9a21-a69d9f8cab72-serviceca\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.031190 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.042007 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.054140 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.068795 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.080952 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.099484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g5hv\" (UniqueName: \"kubernetes.io/projected/34045014-77ce-47a5-9a21-a69d9f8cab72-kube-api-access-2g5hv\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34045014-77ce-47a5-9a21-a69d9f8cab72-serviceca\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129850 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34045014-77ce-47a5-9a21-a69d9f8cab72-host\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34045014-77ce-47a5-9a21-a69d9f8cab72-host\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.131288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34045014-77ce-47a5-9a21-a69d9f8cab72-serviceca\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.153111 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g5hv\" (UniqueName: 
\"kubernetes.io/projected/34045014-77ce-47a5-9a21-a69d9f8cab72-kube-api-access-2g5hv\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.236819 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: W0130 13:43:38.252968 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34045014_77ce_47a5_9a21_a69d9f8cab72.slice/crio-29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae WatchSource:0}: Error finding container 29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae: Status 404 returned error can't find the container with id 29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.315432 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:12:12.210687211 +0000 UTC Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.608263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.611386 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d" exitCode=0 Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.611460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.613449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pxcll" event={"ID":"34045014-77ce-47a5-9a21-a69d9f8cab72","Type":"ContainerStarted","Data":"087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.613475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pxcll" event={"ID":"34045014-77ce-47a5-9a21-a69d9f8cab72","Type":"ContainerStarted","Data":"29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.647535 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.664980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.689354 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.707332 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.724206 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.738696 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.755260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.768521 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.781769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.793741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.805801 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.820998 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.835223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.852842 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.862355 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.864403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.864625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.864826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.865187 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.865606 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.873686 4793 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.873872 4793 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 
13:43:38.874685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.886324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.893105 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896431 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.898599 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.911545 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.911912 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.930173 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.946564 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.949389 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952429 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.961404 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.965934 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3
688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.966082 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.978482 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.992165 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.029604 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.066684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073378 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.109130 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.151328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175211 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.193956 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.233561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277661 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.316631 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:14:08.13287826 +0000 UTC
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380688 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.397630 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:43:39 crc kubenswrapper[4793]: E0130 13:43:39.397760 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.398243 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:43:39 crc kubenswrapper[4793]: E0130 13:43:39.398333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.398406 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
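The status patches embedded in the "failed to patch status" entries above are hard to read because they are quoted twice: the kubelet quotes the patch inside its error string, and the log writer escapes that string again, so every quote inside the patch surfaces as \\\". A minimal Go sketch for recovering the JSON (Go only because the kubelet itself is Go; the payload below is a heavily shortened stand-in built from the network-check-target uid seen in this log, not a full patch):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// A shortened stand-in for a patch body exactly as it appears in the log,
	// where each quote inside the patch shows up as \\\" after two rounds of
	// escaping.
	escaped := `{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}`

	// Collapsing \\\" back to a plain quote undoes both rounds at once for
	// payloads like these, which contain no other escape sequences.
	raw := strings.ReplaceAll(escaped, `\\\"`, `"`)

	// Pretty-print so the strategic-merge-patch structure ($setElementOrder,
	// conditions, containerStatuses) becomes readable.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		fmt.Println("still not valid JSON:", err)
		return
	}
	fmt.Println(pretty.String())
}

With the escaping stripped, the payloads match what status_manager.go:875 reports: strategic merge patches touching conditions and containerStatuses that never land because the webhook call fails first.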
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589330 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589330 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.619083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13"}
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.619077 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13" exitCode=0
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.644145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.657147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.669322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.682471 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692417 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.704417 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d1
79449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.716314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.726663 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.735261 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.744003 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.753746 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.764795 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.774988 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.786210 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794388 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.804567 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896687 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003100 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.064198 4793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107211 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.316950 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:09:24.300044507 +0000 UTC Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.412893 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc 
kubenswrapper[4793]: I0130 13:43:40.415550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.431459 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.448569 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.462648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.478092 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.489671 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.505226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522387 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.551300 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.571253 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.591406 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.605270 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.618961 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.627738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.629815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.635996 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":
\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.654870 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.667758 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.679849 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.690112 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.709697 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726701 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.727786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.740516 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.754071 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.769397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.804928 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.823593 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc 
kubenswrapper[4793]: I0130 13:43:40.829149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829161 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.841770 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.857420 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.872165 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.911929 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059368 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059490 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.059648 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.059704 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 13:43:49.059688847 +0000 UTC m=+39.761037338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060203 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060221 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060227 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.060216351 +0000 UTC m=+39.761564842 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060241 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060246 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.060240711 +0000 UTC m=+39.761589202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060254 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060297 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.060285832 +0000 UTC m=+39.761634333 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.136976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137089 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.171367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171510 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171534 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171546 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171595 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.171582934 +0000 UTC m=+39.872931425 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239725 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.317318 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:02:22.258654664 +0000 UTC Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342142 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.397591 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.397761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.397953 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.398063 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.398624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445247 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.634025 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649976 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.661086 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.674523 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.676317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.689656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.701250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.714503 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.730608 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.744835 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.758220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.770472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.787313 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.800011 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.812216 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.827964 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.848126 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854826 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.864971 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.876333 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.898324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.920397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.935553 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.947230 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.957785 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.968864 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.987432 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db81
5e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.998823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.011690 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.024510 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.036957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.048659 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc 
kubenswrapper[4793]: I0130 13:43:42.060778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163229 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266230 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.317822 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:32:03.169814322 +0000 UTC Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368454 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575582 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.640384 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97" exitCode=0 Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.640839 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.641208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.641514 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.660668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.670987 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.675486 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678545 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.688855 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.701349 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.713982 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.724857 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.733994 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.744255 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.764815 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.774115 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780738 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.787241 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.800924 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.812773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.824754 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.837750 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.848340 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.861024 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.875475 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884991 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.889844 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.901718 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.910707 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.921608 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.958253 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc 
kubenswrapper[4793]: I0130 13:43:42.988142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988151 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.992662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.030889 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.069919 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.089956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.089994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.090007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.090023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.090034 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.114233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.150295 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.294934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.294980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.294993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.295011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.295024 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.318576 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:55:13.140489339 +0000 UTC
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397643 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:43:43 crc kubenswrapper[4793]: E0130 13:43:43.397710 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397775 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:43:43 crc kubenswrapper[4793]: E0130 13:43:43.397977 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:43:43 crc kubenswrapper[4793]: E0130 13:43:43.397998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499874 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602189 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.646940 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866" exitCode=0
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.647106 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.647783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866"}
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.669631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.688080 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.698461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704552 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704630 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.714094 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0
529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.727031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.737730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.753144 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.766551 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.778953 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.791269 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.801272 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.813565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.825957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.841575 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.010965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.010998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.011006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.011021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.011032 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215481 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317583 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.319669 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:37:54.265400165 +0000 UTC Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.595640 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr"] Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.596113 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.597757 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.597969 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.609797 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.625876 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.638969 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.651421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.651485 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.653515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.662632 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.681763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.699435 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr"
Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704599 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704643 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsdl\" (UniqueName: \"kubernetes.io/projected/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-kube-api-access-5lsdl\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.711030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.725351 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727282 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727402 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.741823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.754852 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.766030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.777132 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.789466 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.801702 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsdl\" (UniqueName: \"kubernetes.io/projected/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-kube-api-access-5lsdl\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" 
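
[annotation] Every "Failed to update status for pod" record in this log fails for the same underlying reason: the API server's call to the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743 cannot pass TLS verification, because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30 (a pattern typical of a CRC VM started long after its certificates were minted). The following is a minimal diagnostic sketch, not part of kubelet or the webhook: it fetches the certificate the endpoint presents and applies the same validity-window test that produces the "x509: certificate has expired or is not yet valid" message above. The address is taken from the log; the file name certcheck.go and the exact output wording are illustrative.

// certcheck.go — diagnostic sketch (assumption: run on the node, endpoint
// from the log is reachable). Mirrors the x509 validity-window check that
// the log's TLS error reports; it is not the actual apiserver code path.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint seen in the log

	// Skip chain verification deliberately: normal verification would fail
	// exactly like the log shows, and we want to inspect the cert anyway.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// A TLS server presents at least one certificate in the handshake.
	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))

	// The validity-window test behind the log message: "current time ...
	// is after <notAfter>" corresponds to the expired branch.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

In a CRC cluster this condition usually clears on its own once the embedded certificate-rotation machinery reissues the expired certificates after startup, at which point these patch retries begin to succeed; that recovery behavior is an expectation about CRC, not something shown in this log excerpt.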
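[annotation] Interleaved with the webhook failures, the "Node became not ready" records report Ready=False with reason KubeletNotReady because the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/ (the ovnkube-node pod that would write it is itself still coming up). The sketch below only illustrates the kind of check behind that NetworkReady message — scan the conf directory for a network configuration — under stated assumptions: the directory is the one named in the log, the accepted extensions follow libcni conventions, and the real logic lives in CRI-O/libcni, not here; the file name cnicheck.go is illustrative.

// cnicheck.go — illustration of the readiness condition behind
// "NetworkReady=false reason:NetworkPluginNotReady" (assumption: libcni-style
// extension matching; this is not the actual CRI-O implementation).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		// Extensions conventionally accepted for CNI network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
	}
}

Once ovnkube-node writes its config into that directory, a check like this succeeds and the node's Ready condition flips back to True, which is consistent with the mount and reconcile activity for the ovnkube pods recorded above and below.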
Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.806484 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.807304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.812831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.815179 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.822461 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsdl\" (UniqueName: \"kubernetes.io/projected/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-kube-api-access-5lsdl\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.827345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830623 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.838527 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.849682 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.860404 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.873069 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.883856 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.895750 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.907847 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.908066 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.922858 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4a
c3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932531 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932601 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.940220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.960647 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.974267 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.988366 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.007468 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc 
kubenswrapper[4793]: I0130 13:43:45.035522    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035533    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137726    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137765    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137775    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137791    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137804    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241176    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241203    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241211    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241224    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241234    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.320704    4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:19:30.962127846 +0000 UTC
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343536    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343760    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343773    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343786    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343795    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.397238    4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:43:45 crc kubenswrapper[4793]: E0130 13:43:45.397610    4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.398107    4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.398289    4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:43:45 crc kubenswrapper[4793]: E0130 13:43:45.398454    4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:43:45 crc kubenswrapper[4793]: E0130 13:43:45.398578    4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446653    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446685    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446697    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446712    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446724    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548337    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548381    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548392    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548409    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548418    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651293    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651324    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651333    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651349    4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651360    4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.661679    4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/0.log"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.664455    4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52" exitCode=1
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.664523    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.665502    4793 scope.go:117] "RemoveContainer" containerID="d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52"
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.665725    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" event={"ID":"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01","Type":"ContainerStarted","Data":"07c07021edcccf8ce4d7cc581816d1ce648b86a1379f988ab98458bd8d7c53bd"}
Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.687926    4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.703428 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.718453 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.731641 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.740780 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc 
kubenswrapper[4793]: I0130 13:43:45.753523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.754588 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.766448 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.778980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.794156 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.810689 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.829851 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.839988 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.848472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.855955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856372 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.860230 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.878163 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b350
68071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959840 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.054224 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xfcvw"] Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.054696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.054769 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.075532 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.088220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.099341 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.112825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.119095 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.119165 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5wx\" (UniqueName: \"kubernetes.io/projected/3401bbdc-090b-402b-bf7b-a4a823182946-kube-api-access-cl5wx\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.135015 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0
529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.147540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.161544 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.166847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.166998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.167104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.167196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.167274 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
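
[note] The recurring failure in the status-patch entries above is mechanical: Go's TLS stack rejects any certificate whose NotAfter precedes the current time, which is why every call to the pod.network-node-identity.openshift.io webhook fails with "certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z". A minimal sketch of the same check, assuming a PEM-encoded serving certificate; the path is taken from the webhook-cert mount seen later in this log, and the file name is an assumption:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Assumed location: the network-node-identity webhook container
        // mounts its serving cert at /etc/webhook-cert/ per the pod spec.
        raw, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Mirrors the x509 validity-window check that produces the error
        // string seen throughout this log.
        if now := time.Now(); now.After(cert.NotAfter) {
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }
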
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.176572 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.191401 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.205250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.218712 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.220300 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.220421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5wx\" (UniqueName: \"kubernetes.io/projected/3401bbdc-090b-402b-bf7b-a4a823182946-kube-api-access-cl5wx\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.220559 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 
13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.220689 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:46.720656287 +0000 UTC m=+37.422004828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.235128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.241499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5wx\" (UniqueName: \"kubernetes.io/projected/3401bbdc-090b-402b-bf7b-a4a823182946-kube-api-access-cl5wx\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.247875 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.257620 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270250 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
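
[note] Each failed patch body uses the "$setElementOrder/conditions" key, which is strategic-merge-patch syntax: list elements are merged by their "type" merge key, and the directive pins the order of the resulting list. A sketch of applying such a patch with k8s.io/apimachinery; the tiny original and patch documents here are invented for illustration, not taken from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
        // A toy pod status with one condition (invented for illustration).
        original := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
        // Same shape as the kubelet's patches above: merge entries by "type",
        // keep the order listed under $setElementOrder.
        patch := []byte(`{"status":{` +
            `"$setElementOrder/conditions":[{"type":"Ready"}],` +
            `"conditions":[{"type":"Ready","status":"True"}]}}`)

        merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Pod{})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(merged))
    }
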
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.275222 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.295131 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.321316 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:37:54.276331783 +0000 UTC Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372299 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372777 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.398826 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.475549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.475848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.476071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.476234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.476375 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.580182 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.675745 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" event={"ID":"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01","Type":"ContainerStarted","Data":"d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682612 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.726362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.726523 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.726588 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:47.726571658 +0000 UTC m=+38.427920159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785733 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990601 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092671 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194790 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297346 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.322105 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:33:00.850173464 +0000 UTC Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397408 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397433 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397513 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397509 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397608 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397688 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397866 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.603895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604370 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.685564 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/0.log" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.689669 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.706648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707187 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707242 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.737710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.737903 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.737991 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.737969677 +0000 UTC m=+40.439318248 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809837 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911785 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.015006 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117348 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219834 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.322368 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:24:04.869969193 +0000 UTC Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323767 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427800 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530600 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.632923 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633990 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.693958 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.695482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.696130 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.698509 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.699247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" event={"ID":"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01","Type":"ContainerStarted","Data":"f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.727345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.747439 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.763473 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.773514 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.785763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.796798 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.810520 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.821265 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.833076 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846566 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.850500 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0
529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.858842 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.867023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.878345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.888901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.898519 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.908446 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.920062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.935820 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.946450 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.960219 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.981211 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b350
68071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.994481 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.006890 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.019685 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.036430 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051785 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.064340 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.076904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.092386 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.106283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.118539 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.134165 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151582 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151626 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.15161408 +0000 UTC m=+55.852962571 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151945 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151970 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151992 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152063 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 13:44:05.1520154 +0000 UTC m=+55.853363881 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152127 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.152109142 +0000 UTC m=+55.853457653 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152247 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152448 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.15242718 +0000 UTC m=+55.853775671 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153642 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200479 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.212881 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.229692 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233475 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.248616 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.252320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.252385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.251955 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.252726 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.252737 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.252781 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.252767066 +0000 UTC m=+55.954115557 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.264691 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.282006 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.282173 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.323238 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:12:47.723174589 +0000 UTC Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386501 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.397999 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.398075 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398157 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.398010 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398294 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.398336 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398380 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398426 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.489521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.489969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.490075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.490165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.490263 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592263 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.694683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695344 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.715862 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.730085 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.745774 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.756733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.756925 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.756990 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" 
failed. No retries permitted until 2026-01-30 13:43:53.756973453 +0000 UTC m=+44.458322014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.759754 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.770188 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.783014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.797957 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798657 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.802260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7
a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.812950 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.825284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.840977 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.854580 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.867976 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.877766 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.889330 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901253 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901705 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.904884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.915802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 
13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004361 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.208960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209075 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311828 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.324293 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:22:55.449154957 +0000 UTC Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413011 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413980 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.426104 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.436452 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.446114 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.470618 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.487020 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.499586 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.515079 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516598 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.529248 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.541695 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.551072 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.562171 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.572925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.582936 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.593571 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.607646 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.618972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.618998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.619008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.619023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.619035 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.822886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823554 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926706 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029907 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131890 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.235337 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.255725 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.325318 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:13:44.236169012 +0000 UTC Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.397923 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.397973 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.398025 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398087 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.398135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398261 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398369 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398466 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544791 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647363 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750410 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853728 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955846 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058599 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161680 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264434 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264467 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.325696 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:28:16.692696062 +0000 UTC Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.366832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367441 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.469819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573192 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573286 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676440 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779351 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087111 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296282 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.326697 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:01:14.492607975 +0000 UTC Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397291 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.397441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.397667 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.397806 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397952 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.398190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398878 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.501928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.501994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.502013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.502040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.502096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.604896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605535 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708424 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.801549 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.801923 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.802193 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:01.802164549 +0000 UTC m=+52.503513080 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811240 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913986 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.015940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.015994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.016006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.016023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.016067 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118132 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221149 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323818 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.327197 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:30:05.65750785 +0000 UTC Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530431 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633128 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736207 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045486 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.147946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148484 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.252120 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.327888 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:12:38.336491925 +0000 UTC Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.354691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.354955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.355126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.355256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.355377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398128 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398265 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398297 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399196 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399421 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399493 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458136 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561622 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872153 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078921 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182181 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285252 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.328784 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:05:03.48515477 +0000 UTC Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388233 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490432 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592826 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798501 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901601 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004423 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107623 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211782 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.315934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.315976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.315987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.316005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.316016 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.329456 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:55:35.229159097 +0000 UTC Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397617 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397716 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397724 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.397912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.398037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.398242 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.398473 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419557 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625838 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.729524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.729883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.730090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.730289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.730472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.833815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936636 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.031380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.053402 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.088901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.113408 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.135861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142926 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.154585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.171381 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.187282 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.203786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.214898 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.225250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.238430 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.254154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.269315 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.280717 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 
13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.293317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.310372 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.330628 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:19:32.911810827 +0000 UTC Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347128 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347200 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.450907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.450968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.450984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.451008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.451027 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553713 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.657365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.657716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.657985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.658261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.658483 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.760885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761865 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865224 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.967984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968130 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278419 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.314315 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.319012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.331374 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:59:39.453971504 +0000 UTC Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.335752 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341301 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.358848 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364992 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.383007 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.387725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
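Every failed status patch in this stretch has the same root cause, stated at the tail of each err string: the node.network-node-identity webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, long before the node's current time of 2026-01-30. That can be confirmed from the node itself with a minimal Go sketch (the address is taken from the log line; InsecureSkipVerify is deliberate so the expired certificate can be inspected rather than rejected during the handshake):

    // certcheck.go - print the validity window of the certificate served on
    // the webhook port named in the kubelet errors above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Verification is skipped on purpose: a verifying handshake would
        // fail exactly like the kubelet's webhook POST did.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            log.Fatal("no certificate presented")
        }
        cert := certs[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is EXPIRED, matching the kubelet error")
        }
    }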
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388866 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388891 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397345 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397437 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397543 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397378 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397687 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397816 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397902 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.415716 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.415882 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.417980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
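The 13:43:59.415882 entry is the kubelet giving up for this sync period. A sketch of the bounded-retry shape behind the two messages (upstream kubelet retries nodeStatusUpdateRetry = 5 times per sync; tryUpdateNodeStatus below is a stand-in for the real patch call, failing here the way every attempt in this log does):

    // retry.go - the shape of the kubelet loop that produced "will retry"
    // and "exceeds retry count" above; tryUpdateNodeStatus is a stand-in.
    package main

    import (
        "errors"
        "fmt"
    )

    // Upstream kubelet retries the node status update this many times per sync.
    const nodeStatusUpdateRetry = 5

    func tryUpdateNodeStatus(attempt int) error {
        // Stand-in failure: in this log every attempt dies on the same
        // expired webhook certificate, so retrying within one sync period
        // cannot succeed.
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(i); err != nil {
                fmt.Printf("Error updating node status, will retry: %v\n", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }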
event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418069 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519764 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622881 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725681 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.929935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.929987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.930001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.930023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.930037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
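From here on, the same five-entry heartbeat block repeats roughly every 100 ms with only the timestamps changing. A sketch that tallies lines after stripping the timestamps, to condense a stretch like this into per-message counts (the regular expression is fitted to this log's "Jan 30 ... kubenswrapper" prefix and is an assumption, not a general kubelet log grammar):

    // logsquash.go - count kubelet log lines after removing timestamps, so
    // the repeating NodeNotReady heartbeat collapses to a few counted lines.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "sort"
    )

    // Strips the journald prefix plus klog header, and any embedded RFC3339
    // timestamps, so lines that differ only by time compare equal.
    var ts = regexp.MustCompile(`Jan 30 \d\d:\d\d:\d\d crc kubenswrapper\[\d+\]: [IWE]\d{4} \d\d:\d\d:\d\d\.\d+ +\d+ |\d{4}-\d\d-\d\dT\d\d:\d\d:\d\dZ`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // status-patch lines are huge
        for sc.Scan() {
            counts[ts.ReplaceAllString(sc.Text(), "")]++
        }
        keys := make([]string, 0, len(counts))
        for k := range counts {
            keys = append(keys, k)
        }
        sort.Slice(keys, func(i, j int) bool { return counts[keys[i]] > counts[keys[j]] })
        for _, k := range keys {
            msg := k
            if len(msg) > 120 {
                msg = msg[:120] + "..."
            }
            fmt.Printf("%6dx %s\n", counts[k], msg)
        }
    }

Feed it the raw log, e.g. go run logsquash.go < kubelet.log.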
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033265 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136664 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.240977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.332528 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:46:24.223670814 +0000 UTC Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343481 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.409642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.423642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.439904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446439 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.454223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.472363 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.496234 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.509362 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.524837 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.537970 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549335 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.551405 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.565507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.577195 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.590480 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.602873 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.612706 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.622542 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.624358 4793 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.643074 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc3
2fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652617 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.655599 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.666775 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.678631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.689301 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.700302 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.709997 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.720626 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.732501 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.743824 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.753262 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754452 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.765625 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.777677 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.795001 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d
9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.809996 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.821911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.833640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.855338 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959989 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062807 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269194 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.333331 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:36:40.916979874 +0000 UTC Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371324 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398279 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398321 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398548 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398672 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485278 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691513 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794888 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.888468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.888658 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.889303 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:17.889276683 +0000 UTC m=+68.590625214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.898437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.898857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.899159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.899372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.899571 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002962 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106810 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311682 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.334278 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:18:17.426581611 +0000 UTC Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414803 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620926 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.723855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724565 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.930679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931634 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034976 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241330 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.334680 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:31:47.357520775 +0000 UTC Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.343919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344403 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398533 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398591 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398906 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398974 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.449210 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551649 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.654997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655158 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757935 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.859900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860414 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962870 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068922 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171941 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.274993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275111 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.335451 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:04:08.725203138 +0000 UTC Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377292 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479844 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582531 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685420 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787571 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890773 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993916 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204162 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222102 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222271 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222247223 +0000 UTC m=+87.923595714 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222351 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222483 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222490 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222524 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222561 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222579871 +0000 UTC m=+87.923928352 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222599 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222634 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222612141 +0000 UTC m=+87.923960672 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222707 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222681053 +0000 UTC m=+87.924029584 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306543 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.323152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323376 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323413 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323428 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323504 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.32348303 +0000 UTC m=+88.024831531 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.336771 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:15:23.055875607 +0000 UTC Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397474 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397585 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397608 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397634 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397723 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.408980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511288 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511331 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.614868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.614945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.614984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.615015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.615038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722185 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722226 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825939 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928219 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.030962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031191 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133592 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236965 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.337278 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:19:47.088168386 +0000 UTC Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339434 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339497 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442199 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546713 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649691 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752616 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855544 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958699 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061372 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.337916 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:34:29.675908603 +0000 UTC Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369930 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397272 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397350 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397502 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397570 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397727 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397798 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.471972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472060 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.676965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677117 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881393 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086399 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188970 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.292006 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.338117 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:32:35.834755046 +0000 UTC Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394827 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.497001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701396 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.766556 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.767167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/0.log" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.768975 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" exitCode=1 Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.769006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.769036 4793 scope.go:117] "RemoveContainer" containerID="d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.769600 4793 scope.go:117] "RemoveContainer" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" Jan 30 13:44:08 crc kubenswrapper[4793]: E0130 13:44:08.769722 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.784706 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.797014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.807678 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.818200 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.835447 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed 
to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.847369 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.859515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.871018 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.883509 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.894798 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907298 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.908441 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.920426 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.933480 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.947706 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.959560 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.977538 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.992944 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc 
kubenswrapper[4793]: I0130 13:44:09.010669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113927 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217716 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321876 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.338305 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:33:39.847985784 +0000 UTC
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397259 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397346 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.397410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.397528 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397277 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.397622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.398208 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424961 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.527482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.527793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.528085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.528198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.528282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630451 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733762 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755195 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.771590 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.774486 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777216 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.791760 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.795988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.808987 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812698 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.823632 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.826833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.826977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.827039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.827147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.827246 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.840212 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.840325 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944469 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047636 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253944 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.338681 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:43:27.26465699 +0000 UTC Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.414357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ini
tContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.430880 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.443733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.458004 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459783 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.486194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7
a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" 
logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"i
nitContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.501565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.514764 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.527939 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.540192 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.551425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563252 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563913 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.573591 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.583635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.594892 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.604924 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.621760 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.633202 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.666029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.768964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.873520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.873760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.873771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.873786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.873796 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.976424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.976686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.976955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.977038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.977166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.080626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.081114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.081264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.081380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.081515 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.184980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.185092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.185104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.185135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.185146 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.287497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.287828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.287957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.288122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.288455 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.338826 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:37:14.813778487 +0000 UTC Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.392039 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397383 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397395 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397421 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397737 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397581 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397421 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397936 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495708 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.599603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.599990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.600263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.600477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.600652 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.704621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.704693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.704715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.704744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.704765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.809410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.809880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.810039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.810218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.810347 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.913175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.914199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.914237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.914260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.914275 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.016994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.017031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.017064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.017080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.017095 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.119939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.119993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.120017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.120041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.120150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.222270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.222325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.222336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.222354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.222367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.325733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.325786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.325796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.325812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.325823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.339972 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:24:47.465471492 +0000 UTC Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.428511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.428553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.428567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.428605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.428617 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.531358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.531403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.531414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.531427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.531439 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.633626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.633717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.633740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.633771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.633783 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.736303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.736575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.736657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.736738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.736802 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.839244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.839327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.839337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.839360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.839376 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.942133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.942160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.942168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.942180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.942189 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:12Z","lastTransitionTime":"2026-01-30T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.043929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.043956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.043964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.044023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.044033 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.147238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.147274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.147286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.147302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.147314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.249081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.249114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.249129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.249176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.249194 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.340319 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:38:23.07676149 +0000 UTC Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351127 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398304 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398369 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398442 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398591 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398677 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398733 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398796 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.659478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.659509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.659519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.659532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.659541 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.762472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.762506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.762514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.762528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.762537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.865845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.865893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.865904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.865920 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.865938 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.969387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.969460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.969478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.969502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.969516 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.071915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.071983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.071995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.072010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.072021 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.175177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.175454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.175544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.175650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.175750 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.278254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.278295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.278304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.278318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.278327 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.341307 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:49:38.53161922 +0000 UTC Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.381374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.381452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.381464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.381488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.381505 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.484482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.484526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.484535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.484585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.484596 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.586672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.586727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.586741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.586763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.586787 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.688889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.688927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.688936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.688950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.688959 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.791463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.791520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.791529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.791544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.791553 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.895136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.895206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.895218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.895232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.895243 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.997497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.997538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.997549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.997564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.997576 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:14Z","lastTransitionTime":"2026-01-30T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.100459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.100523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.100539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.100564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.100581 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.203532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.203587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.203601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.203621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.203636 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.306025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.306083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.306092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.306108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.306118 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.341753 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:43:01.917877761 +0000 UTC Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397845 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397853 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397866 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.397960 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.398040 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.398118 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.398189 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510761 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613260 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920423 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022502 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124174 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328167 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.342167 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:41:36.723992576 +0000 UTC Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533881 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.637012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.739738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.739941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.740126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.740194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.740255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842122 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047208 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047238 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.149950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150070 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.251991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252084 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.343107 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:59:16.047852147 +0000 UTC Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.354712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.354949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.355069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.355165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.355274 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398110 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398183 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398211 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398262 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398306 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398431 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560667 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765545 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867352 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.945296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.945410 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.945477 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:49.945457312 +0000 UTC m=+100.646805803 (durationBeforeRetry 32s). 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.343559 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:36:24.538034173 +0000 UTC Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381211 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.483972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586251 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689238 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895622 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997474 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100440 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.304950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305824 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.344682 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:54:25.76134155 +0000 UTC Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398251 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398423 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.398648 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398958 4793 scope.go:117] "RemoveContainer" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.398975 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.399149 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.399304 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.411914 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc 
kubenswrapper[4793]: I0130 13:44:19.412386 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.424308 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.435796 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.447520 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.460543 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.473839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.484209 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.496781 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.506663 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.516763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.527309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 
2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.539803 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.552552 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.564554 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.574692 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.585582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.608034 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616541 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718362 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.816333 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.819486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.820003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821450 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.836502 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.854906 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.869803 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.890810 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.908399 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.921552 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923348 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.946832 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.962367 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.984661 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043333 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043394 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.063966 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.083421 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.093587 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.107538 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.118000 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.118024 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.127458 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130112 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130278 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.141511 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.143322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.156638 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.157930 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3
688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161414 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.172867 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175400 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.190243 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.190396 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191691 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294341 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.345277 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:10:38.23478811 +0000 UTC Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396991 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.412790 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.423196 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.433371 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.447710 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.463999 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.475142 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.493283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.505181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.516963 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.526878 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.539615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.554391 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.567805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.579484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.588925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.598996 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602576 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602643 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.618034 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705306 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808061 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.823982 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.824702 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.826936 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" exitCode=1
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.826990 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"}
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.827021 4793 scope.go:117] "RemoveContainer" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.828008 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"
Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.828274 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"
Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.839365 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.849550 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.866716 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 
13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.880565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.892338 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.904981 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911813 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.918194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.930647 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.945360 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.955099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.965011 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.979440 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.990073 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.002323 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.012938 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014322 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.025848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.037923 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: 
I0130 13:44:21.117553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117750 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.223186 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326178 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.345495 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:50:38.536178898 +0000 UTC Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.397495 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.397636 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.397846 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.397908 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.398099 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.398161 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.398296 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.398354 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.429027 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530915 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634207 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736714 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.830697 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.833458 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.833590 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838273 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.846177 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.857357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.868353 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.876321 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.885604 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.898387 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.910582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.920898 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.933028 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940437 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.944014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.957236 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 
2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.972275 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.982529 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.998336 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e05182
7116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.009848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.019222 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.028708 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.246959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.246984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.246992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.247022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.247032 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.346145 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:26:09.483688963 +0000 UTC
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348811 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554509 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.758999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759073 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837161 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/0.log" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837191 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" containerID="9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812" exitCode=1 Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerDied","Data":"9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837499 4793 scope.go:117] "RemoveContainer" containerID="9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.851714 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.862515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864578 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.878656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.889434 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.906879 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.921527 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.931792 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.941515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.952529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.965595 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 
13:44:22.966614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.978519 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.990791 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.002921 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.013379 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.023995 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.037877 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.053695 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069229 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171412 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273491 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.347307 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:18:44.306697568 +0000 UTC Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376120 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.397855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.397910 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.397956 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.398025 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.398103 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.398159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.398214 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.398269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478173 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683726 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790149 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.841357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/0.log" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.841398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.856230 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.870631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.883293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892855 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.893484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.904845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.916505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.929609 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.944027 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.957152 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.968235 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.979512 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.991129 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc 
kubenswrapper[4793]: I0130 13:44:23.994602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994628 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.999373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.016476 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.028232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.038550 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.048960 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.096916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097465 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303795 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.347958 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:30:40.200319089 +0000 UTC Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.406996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511262 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.613992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614195 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717326 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025573 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128748 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.232010 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334376 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.348703 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:51:39.846515397 +0000 UTC Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398194 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398296 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398497 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398507 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398572 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398641 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398946 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541180 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645208 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748115 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851306 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057458 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159839 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.349668 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:34:03.484926221 +0000 UTC Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467484 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571588 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674666 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777794 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880404 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086606 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.189949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190100 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292130 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.350274 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:32:22.174782172 +0000 UTC Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394245 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397411 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397487 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397432 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397572 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397699 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397875 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397980 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497660 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600174 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702970 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805998 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.908942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012694 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115744 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.218937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322725 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.351122 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:30:29.903666294 +0000 UTC Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.426011 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741547 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.844872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.844964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.844991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.845021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.845129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.948925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052217 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.258804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.258919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.258988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.259025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.259077 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.351927 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:38:36.731906104 +0000 UTC Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362704 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397499 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397548 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397525 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397666 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397786 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397840 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397937 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465626 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569169 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773974 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876837 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185617 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.280946 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.287012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.313482 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319380 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.335484 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342963 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.352483 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:33:27.324818466 +0000 UTC Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.361542 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366928 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.381892 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.382205 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
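Every retry in the records above fails for the same underlying reason: the kubelet's status PATCH is routed through the "node.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743, and the TLS connection is rejected during certificate verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30T13:44:30Z. The "certificate has expired or is not yet valid" message is the standard x509 validity-window test. A minimal Go sketch of that test, assuming a hypothetical local PEM file (webhook-cert.pem is an illustrative name, not a path taken from this log):

// certcheck: minimal sketch of the x509 NotBefore/NotAfter test that fails
// in the log above. The input path is an assumption for illustration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("webhook-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block in PEM input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	// crypto/x509 applies this same window test during chain verification;
	// the error in the log reports the NotAfter half of it failing.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Run against the certificate behind this log, the NotAfter branch would print timestamps matching the quoted error; rotating or regenerating the webhook's serving certificate is what would unblock these status patches.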
event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383896 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.415584 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.426039 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.435507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.445277 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.456006 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.467014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.481212 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489757 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.494013 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.506018 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.524128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.535331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.546980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.557764 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.567428 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.583149 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592179 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.593778 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.611069 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b350
68071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694698 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797855 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900630 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010780 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113668 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217509 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.353150 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:28:27.130567791 +0000 UTC Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397814 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.397887 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.397961 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397998 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.398076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.398132 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422990 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631187 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631323 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733908 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836898 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939992 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042989 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146547 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353521 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.354382 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:57:05.701883648 +0000 UTC Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661847 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.868980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869159 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074335 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177412 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.279902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.279989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.280005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.280034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.280092 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.354981 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:53:09.776495734 +0000 UTC Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382905 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397576 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.397650 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.397827 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397973 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.398109 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.398261 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485197 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485250 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588401 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691387 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794652 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.904897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905374 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.111954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112097 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.214629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319841 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.356232 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:30:54.347948661 +0000 UTC Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423611 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.529836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633627 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737878 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945407 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049744 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.153310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256253 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256373 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.356910 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:25:02.017465021 +0000 UTC
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.358390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.358919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.359176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.359381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.359590 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.397854 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.398173 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.398039 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.398196 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.397879 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.398806 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.398998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.399129 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.399374 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.399618 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.463123 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565315 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667573 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770708 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976563 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079518 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285548 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.357810 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:19:42.575093871 +0000 UTC Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388241 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490913 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593427 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798694 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107201 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251486 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251612 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.251806 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.251854 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.251840351 +0000 UTC m=+151.953188842 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252152 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252173 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252184 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252201 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252219 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.252209782 +0000 UTC m=+151.953558263 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252260 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.252244792 +0000 UTC m=+151.953593293 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252391 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.252383756 +0000 UTC m=+151.953732247 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313732 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.352995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353185 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353200 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353212 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353254 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.353240493 +0000 UTC m=+152.054588994 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
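The failing kube-api-access-* volumes above are projected volumes: a ServiceAccount token combined with the kube-root-ca.crt and, on OpenShift, openshift-service-ca.crt configmaps, which is why each "not registered" configmap error aborts the whole mount and pushes the retry out by 1m4s. A sketch of the equivalent volume definition using the upstream API types; the object names come from the log, while the token expiry and key/path choices are assumed conventions:

// projected.go - illustrative reconstruction of the kube-api-access-cqllr
// volume the reconciler is trying to mount. Object names follow the log;
// the token expiry and Items mappings are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // assumed default-style expiry, not from the log
	vol := corev1.Volume{
		Name: "kube-api-access-cqllr",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// Both configmaps below are the objects reported as
					// "not registered" in the projected.go errors above.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}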
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.358965 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:46:42.761422588 +0000 UTC
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397273 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397355 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397356 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397233 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397521 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397562 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415469 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517863 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620216 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722623 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825101 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029784 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.235819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.235974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.236078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.236239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.236415 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338399 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.359280 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:02:37.693562917 +0000 UTC Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441700 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544860 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646874 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749806 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.851914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852462 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955672 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058113 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159872 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262430 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.359647 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:49:04.2078353 +0000 UTC Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364515 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.397980 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.398229 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.398263 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398502 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.398543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398715 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398695 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398794 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467424 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774687 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877465 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978867 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080492 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.286612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.286942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.287090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.287203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.287326 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.360165 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:18:19.004695018 +0000 UTC Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.420326 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.434822 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.445525 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.460577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.474334 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492862 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492937 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.496309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.509494 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.522023 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181
f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.535085 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.546115 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.558307 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.568771 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.579734 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.591794 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596406 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.604840 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.615097 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.626442 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.660983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.672846 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676527 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.692884 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697429 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.709263 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713185 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.727639 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.744337 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.744452 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746663 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.848809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952507 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.055823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262667 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.360772 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:28:43.726225941 +0000 UTC
[... status cycle repeats at 13:44:41.364 ...]
Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.398196 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.398313 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.398506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.398569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.398709 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.398781 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.399100 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.399263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.408263 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
[... status cycle repeats at ~100 ms intervals, 13:44:41.467 through 13:44:42.188 ...]
[... status cycle repeats at 13:44:42.291 ...]
Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.361356 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:22:11.201034361 +0000 UTC
[... status cycle repeats at ~100 ms intervals, 13:44:42.394 through 13:44:43.321 ...]
Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.362226 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:27:45.388084496 +0000 UTC
Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397556 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.397757 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397787 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.397972 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.397992 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.398030 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... status cycle repeats at ~100 ms intervals, 13:44:43.423 through 13:44:44.352 ...]
Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.362597 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:39:33.411690165 +0000 UTC
[... status cycle repeats at ~100 ms intervals, 13:44:44.455 through 13:44:45.175 ...]
[... status cycle repeats at 13:44:45.278 ...]
Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.362990 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:19:48.929972676 +0000 UTC
[... status cycle repeats at 13:44:45.380 ...]
Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.397970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.398279 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.398493 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.398524 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.398734 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.398658 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.398576 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.399033 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692681 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795924 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899349 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104232 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207829 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.310829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.310939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.310954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.311301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.311539 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.363841 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:57:34.270092418 +0000 UTC Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414144 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620886 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723243 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826274 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.135896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.135965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.135985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.136010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.136028 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238810 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.340934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.364793 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:33:52.893259482 +0000 UTC Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397395 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397407 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397334 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397685 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397777 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.409971 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.445712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446641 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549949 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755223 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.960885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.960958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.960981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.961009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.961031 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.063975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064142 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167151 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270664 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.366097 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:14:55.724393446 +0000 UTC Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373516 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475999 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.578934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.681664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.681999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.682380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.682699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.683043 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.889990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890477 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993318 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200824 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.303730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.303987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.304105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.304198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.304269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.367243 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:01:57.386903407 +0000 UTC Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397310 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397467 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397527 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397628 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397668 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397817 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.407139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.407514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.407897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.408244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.408551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511598 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613833 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.716525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.716857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.717015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.717223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.717370 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.819950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820118 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.978818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.979041 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.979149 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:53.979130324 +0000 UTC m=+164.680478825 (durationBeforeRetry 1m4s). 
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024894 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
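Every "Node became not ready" entry in this stretch bottoms out in the same check: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (later entries show multus waiting for ovn-kubernetes to write 10-ovn-kubernetes.conf). The sketch below is a rough approximation of what such a readiness probe amounts to, illustrative only and not the actual CRI-O/kubelet implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent approximates the readiness check behind the recurring
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message: scan
// the CNI config directory and treat the network as NotReady until at
// least one .conf/.conflist/.json file shows up.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file; has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}
```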
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232435 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.334976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.367400 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:02:26.89624168 +0000 UTC Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.398846 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.420245 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.432547 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437633 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437703 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.459632 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.477314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.488494 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.503239 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.515425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.526168 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.538584 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.539758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.539993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.540004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.540018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.540029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.551393 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.564799 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.581773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.593793 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.607385 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.619154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.631610 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642352 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.645937 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.661801 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b
29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.671271 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746178 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746221 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848958 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.923511 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.926123 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.926669 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.938671 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79
679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952634 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.956090 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.968281 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.981948 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to 
/host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.994003 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.003728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004665 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.012667 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.019232 4793 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0
878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"size
Bytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.023986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024083 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.030925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.042233 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046725 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.047907 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30
T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.058227 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/o
vnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.062994 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.067690 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.071733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.083827 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.091932 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095639 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.103140 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc
3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.109412 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.109740 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111415 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.122328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.139853 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.151723 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.170628 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.197904 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.212591 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.213946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.213981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.213991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.214006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.214032 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.228547 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317654 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.368878 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:06:49.616275053 +0000 UTC Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398478 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.398773 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398681 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.399018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398699 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.399248 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.399458 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.419874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420854 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.522694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.522926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.522992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.523091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.523171 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625696 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625705 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.728958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832874 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.931376 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.931902 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935228 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" exitCode=1 Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935296 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935878 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.936036 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.956953 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.980462 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"tor-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-rdsch openshift-multus/multus-additional-cni-plugins-nsxfs openshift-multus/network-metrics-daemon-xfcvw openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI0130 13:44:51.565428 6932 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 13:44:51.565439 6932 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565447 6932 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565453 6932 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0130 13:44:51.565457 6932 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0130 13:44:51.565461 6932 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565475 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:44:51.565545 6932 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.994311 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 
13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.005376 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.015367 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.027005 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043711 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.050296 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.069741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.088828 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.102647 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.118387 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.128455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.140437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146440 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.152184 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.161899 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.172933 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.190656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.202713 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.218192 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc 
kubenswrapper[4793]: I0130 13:44:52.249246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249271 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.370674 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:13:43.467587815 +0000 UTC
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455398 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558647 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.660749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661531 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763788 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.941317 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.944721 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:44:52 crc kubenswrapper[4793]: E0130 13:44:52.944859 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.959987 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969273 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.972286 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.984130 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.006018 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.017469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.030000 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.049174 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"tor-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-rdsch openshift-multus/multus-additional-cni-plugins-nsxfs openshift-multus/network-metrics-daemon-xfcvw openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI0130 13:44:51.565428 6932 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 13:44:51.565439 6932 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565447 6932 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565453 6932 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0130 13:44:51.565457 6932 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0130 13:44:51.565461 6932 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565475 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:44:51.565545 6932 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.061832 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071751 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.074935 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.084472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.098074 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.115466 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.128334 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.141563 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.152759 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.164807 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174359 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.175559 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.188214 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.207707 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276943 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.371651 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:01:46.117660695 +0000 UTC Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379805 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398192 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398301 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398620 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398758 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398812 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398869 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482773 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792711 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997755 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.099508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.099836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.099932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.100020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.100141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.372743 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:52:28.596418077 +0000 UTC Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.410173 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513888 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.616660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617523 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723208 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723222 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929380 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.031936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.031993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.032013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.032037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.032113 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135576 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238976 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341864 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.373246 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:04:13.232956527 +0000 UTC Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397583 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397641 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397654 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.397761 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397833 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.397945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.398009 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.398028 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444850 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.547805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548528 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.651931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858782 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.374156 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:59:08.193932394 +0000 UTC
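The rotation deadline in the certificate_manager.go:356 record above differs on every attempt (compare the later records at 13:44:57, 13:44:58, and so on). A hedged sketch of the idea: client-go's certificate manager re-picks the deadline at a jittered point, roughly within the 70-90% window of the certificate's validity, each time it evaluates rotation. The constants and the notBefore value below are assumptions for illustration; only the expiry comes from the log.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline approximates the jittered deadline selection:
    // a random point between 70% and 90% of the certificate's validity.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // notBefore is an assumed issue time; notAfter is the expiry from the log.
        notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        for i := 0; i < 3; i++ {
            fmt.Println(nextRotationDeadline(notBefore, notAfter)) // differs per call
        }
    }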
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.374297 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:02:30.644904972 +0000 UTC
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397211 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397273 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397323 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397534 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397652 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397871 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397976 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.374735 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:16:41.391506018 +0000 UTC
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.048997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.375880 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:52:41.551327016 +0000 UTC
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397694 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.398116 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.398570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397725 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.399307 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397845 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.399649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.081988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082241 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
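Two records below, the status manager logs "Failed to update status for pod": it is attempting a strategic-merge patch against the pod's status subresource. A hedged client-go sketch of that operation, assuming kubeconfig access to the cluster; the patch body here is a trimmed stand-in, not the payload from the log record.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Trimmed stand-in for the much larger patch in the log record below.
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2026-01-30T13:43:44Z"}]}}`)
        _, err = client.CoreV1().Pods("openshift-multus").Patch(
            context.TODO(),
            "multus-additional-cni-plugins-nsxfs",
            types.StrategicMergePatchType,
            patch,
            metav1.PatchOptions{},
            "status", // target the status subresource, as the kubelet does
        )
        if err != nil {
            panic(err)
        }
    }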
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.376335 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:39:27.305695908 +0000 UTC
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.415202 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.439233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.453644 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.465191 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.477648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494798 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.504998 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f
0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"tor-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-rdsch openshift-multus/multus-additional-cni-plugins-nsxfs openshift-multus/network-metrics-daemon-xfcvw openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI0130 13:44:51.565428 6932 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 13:44:51.565439 6932 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565447 6932 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565453 6932 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0130 13:44:51.565457 6932 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0130 13:44:51.565461 6932 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565475 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:44:51.565545 6932 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.517782 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.527324 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.537684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.549342 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.562065 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.574027 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.584298 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.596415 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597365 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.610016 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.622226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6a
c839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.635901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.648608 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.662903 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803206 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.008007 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110214 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.213914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.214366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.214990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.215120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.215255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.227990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.241339 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244895 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.257293 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.262030 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280670 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.310259 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.377256 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:12:22.463492661 +0000 UTC Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.397472 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.397613 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.397802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.397861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.397981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.398032 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.398153 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.398199 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421355 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533109 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.636007 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.737954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.737994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.738006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.738019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.738028 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840528 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942516 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046628 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.148738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149307 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251397 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.353959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354109 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.377947 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:41:18.854577032 +0000 UTC Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456587 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.559192 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.559222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.559230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.559243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.559252 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661669 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.763934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867225 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.071950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.277526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.277954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.278085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.278183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.278244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.378737 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:14:34.644429297 +0000 UTC Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397902 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397932 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398342 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398418 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398520 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483962 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689655 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791721 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791818 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894591 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100210 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202542 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305319 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.379576 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:41:10.436905223 +0000 UTC Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.409999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514138 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.616801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.616883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.616906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.617002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.617121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.719910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.719974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.720013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.720030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.720041 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822894 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926753 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.030007 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138426 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240696 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240719 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.342665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.342867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.342937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.343036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.343141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.380117 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:52:07.119457429 +0000 UTC Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397492 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.398027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.397682 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.398115 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397524 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.398202 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547638 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... this five-entry block (the four node events and the "Node became not ready" condition) repeats with only its timestamps advancing, roughly every 100 ms, from 13:45:05.650278 through 13:45:07.192340; the one distinct entry in that window follows ...]
Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.381225 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:29:33.63224149 +0000 UTC
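The certificate_manager entries in this log recompute the rotation deadline on every pass and land on a different time each try. As I understand the upstream client-go certificate manager, it picks the deadline at a uniformly random point between 70% and 90% of the certificate's validity window and re-rolls the jitter on each evaluation, which would explain the jumping values. A minimal Go sketch of that policy; the jitter range and the one-year validity below are assumptions for illustration, not values taken from this cluster:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline sketches the assumed client-go policy: rotate at a
// random point between 70% and 90% of the certificate's lifetime.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry copied from the log; a one-year validity is assumed.
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.AddDate(-1, 0, 0)
	// Each evaluation re-rolls the jitter, so the printed deadlines
	// differ from run to run, like the logged ones do.
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}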
[... the node event block continues repeating, timestamps aside, from 13:45:07.298180 through 13:45:08.839164; the distinct entries interleaved with it follow ...]
Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.381596 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:05:05.811136369 +0000 UTC
Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.398359 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.398498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.398602 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.398772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.398404 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400204 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.400317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.400392 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.382497 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:09:59.891365044 +0000 UTC
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.398760 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:45:08 crc kubenswrapper[4793]: E0130 13:45:08.398941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"
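The "back-off 40s" for ovnkube-controller above, and the "back-off 10s" for kube-multus just below, come from the kubelet's crash-loop restart back-off: the delay starts at 10s and doubles after each failed restart, up to a cap (5 minutes in stock kubelets, as far as I know). A minimal sketch of that schedule; the function and cap are illustrative assumptions, not the kubelet's actual implementation:

package main

import (
	"fmt"
	"time"
)

// backoff returns a kubelet-style delay before the next restart of a
// crash-looping container: 10s initially, doubling per failed restart,
// capped at 5 minutes. Values match the "back-off 10s/40s" messages
// seen in this log; the code itself is a sketch.
func backoff(failedRestarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < failedRestarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	// kube-multus at 10s is on its first back-off; ovnkube-controller
	// at 40s implies two earlier failed restarts.
	for r := 0; r <= 5; r++ {
		fmt.Printf("failed restarts %d -> back-off %v\n", r, backoff(r))
	}
}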
[... the node event block continues repeating from 13:45:08.942080 through 13:45:09.458712; the distinct entries in that window follow ...]
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.997829 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log"
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998255 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/0.log"
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998298 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" exitCode=1
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998328 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerDied","Data":"95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"}
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998361 4793 scope.go:117] "RemoveContainer" containerID="9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812"
Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998814 4793 scope.go:117] "RemoveContainer" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"
Jan 30 13:45:08 crc kubenswrapper[4793]: E0130 13:45:08.999082 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2ssnl_openshift-multus(3e8d16db-eb58-4895-8c24-47d6f12b1ea4)\"" pod="openshift-multus/multus-2ssnl" podUID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.046278 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=22.046251114 podStartE2EDuration="22.046251114s" podCreationTimestamp="2026-01-30 13:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.030836197 +0000 UTC m=+119.732184708" watchObservedRunningTime="2026-01-30 13:45:09.046251114 +0000 UTC m=+119.747599605"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.097994 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" podStartSLOduration=97.097969586 podStartE2EDuration="1m37.097969586s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.068177849 +0000 UTC m=+119.769526370" watchObservedRunningTime="2026-01-30 13:45:09.097969586 +0000 UTC m=+119.799318097"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.137834 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=97.137813572 podStartE2EDuration="1m37.137813572s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.117411066 +0000 UTC m=+119.818759577" watchObservedRunningTime="2026-01-30 13:45:09.137813572 +0000 UTC m=+119.839162073"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.162373 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mbqcp" podStartSLOduration=98.162355303 podStartE2EDuration="1m38.162355303s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.147776578 +0000 UTC m=+119.849125069" watchObservedRunningTime="2026-01-30 13:45:09.162355303 +0000 UTC m=+119.863703794"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.175738 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podStartSLOduration=98.175716478 podStartE2EDuration="1m38.175716478s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.163659527 +0000 UTC m=+119.865008028" watchObservedRunningTime="2026-01-30 13:45:09.175716478 +0000 UTC m=+119.877064969"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.202584 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pxcll" podStartSLOduration=97.202555139 podStartE2EDuration="1m37.202555139s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.202096567 +0000 UTC m=+119.903445078" watchObservedRunningTime="2026-01-30 13:45:09.202555139 +0000 UTC m=+119.903903630"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.224347 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.22432726 podStartE2EDuration="1m9.22432726s" podCreationTimestamp="2026-01-30 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.224185126 +0000 UTC m=+119.925533627" watchObservedRunningTime="2026-01-30 13:45:09.22432726 +0000 UTC m=+119.925675751"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.234011 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.233990559 podStartE2EDuration="28.233990559s" podCreationTimestamp="2026-01-30 13:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.233941707 +0000 UTC m=+119.935290198" watchObservedRunningTime="2026-01-30 13:45:09.233990559 +0000 UTC m=+119.935339050"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.249075 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=94.249041476 podStartE2EDuration="1m34.249041476s" podCreationTimestamp="2026-01-30 13:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.248982154 +0000 UTC m=+119.950330645" watchObservedRunningTime="2026-01-30 13:45:09.249041476 +0000 UTC m=+119.950389967"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.382868 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:31:25.356830409 +0000 UTC
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397379 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397818 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397900 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397966 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the node event block repeats a final stretch, from 13:45:09.561285 through 13:45:10.283429; the remaining distinct entries follow ...]
Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.003615 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log"
Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.384854 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:30:00.403472006 +0000 UTC
Jan 30 13:45:10 crc kubenswrapper[4793]: E0130 13:45:10.384897 4793 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 30 13:45:10 crc kubenswrapper[4793]: E0130 13:45:10.487996 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.385174 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:48:24.650629195 +0000 UTC
Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397620 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397660 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397707 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397781 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397898 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400913 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:11Z","lastTransitionTime":"2026-01-30T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.437011 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" podStartSLOduration=98.436987144 podStartE2EDuration="1m38.436987144s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.30943217 +0000 UTC m=+120.010780651" watchObservedRunningTime="2026-01-30 13:45:11.436987144 +0000 UTC m=+122.138335635" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.437483 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms"] Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.437935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.440305 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.440319 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.441112 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.442763 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503665 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1b5cd2-75f2-4d59-99f5-3ea731377918-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c1b5cd2-75f2-4d59-99f5-3ea731377918-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1b5cd2-75f2-4d59-99f5-3ea731377918-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503809 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1b5cd2-75f2-4d59-99f5-3ea731377918-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604861 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c1b5cd2-75f2-4d59-99f5-3ea731377918-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1b5cd2-75f2-4d59-99f5-3ea731377918-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.605178 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.605240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.605939 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1b5cd2-75f2-4d59-99f5-3ea731377918-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.609805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1b5cd2-75f2-4d59-99f5-3ea731377918-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.636841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c1b5cd2-75f2-4d59-99f5-3ea731377918-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.753606 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: W0130 13:45:11.774564 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1b5cd2_75f2_4d59_99f5_3ea731377918.slice/crio-d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473 WatchSource:0}: Error finding container d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473: Status 404 returned error can't find the container with id d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473 Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.011575 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" event={"ID":"6c1b5cd2-75f2-4d59-99f5-3ea731377918","Type":"ContainerStarted","Data":"99a536b8a2bdd47c0042557739c6ab73621e64b427a46087402619c292519bf1"} Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.011636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" event={"ID":"6c1b5cd2-75f2-4d59-99f5-3ea731377918","Type":"ContainerStarted","Data":"d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473"} Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.026198 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" podStartSLOduration=100.026177646 podStartE2EDuration="1m40.026177646s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:12.025465567 +0000 UTC m=+122.726814088" watchObservedRunningTime="2026-01-30 13:45:12.026177646 +0000 UTC m=+122.727526137" Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.386337 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-29 22:37:18.066101981 +0000 UTC Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.386385 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.395216 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397707 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397741 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397706 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.397834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.397927 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.397977 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.398076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.397680 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.397764 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.397675 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.397804 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.397888 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.397959 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.398732 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.398905 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.488916 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397883 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398304 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397962 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398391 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397979 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397930 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398514 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.397985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398542 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.398041 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398696 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.398138 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398796 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.398002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398887 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:20 crc kubenswrapper[4793]: E0130 13:45:20.489479 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397679 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397730 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397856 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.397849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397967 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.398187 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.398212 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.398568 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:22 crc kubenswrapper[4793]: I0130 13:45:22.398911 4793 scope.go:117] "RemoveContainer" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.048359 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.048405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd"} Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.069336 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2ssnl" podStartSLOduration=111.069307134 podStartE2EDuration="1m51.069307134s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:23.067732213 +0000 UTC m=+133.769080744" watchObservedRunningTime="2026-01-30 13:45:23.069307134 +0000 UTC m=+133.770655665" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397676 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397700 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397805 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.397797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397911 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.398018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.398141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.398206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.398860 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.399024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397436 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397534 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.397632 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.397784 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.397952 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.398098 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.491002 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.398846 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397982 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397982 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.399159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.399177 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.399275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398246 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398290 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399436 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399290 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398407 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399589 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399735 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:30 crc kubenswrapper[4793]: E0130 13:45:30.491636 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397245 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397269 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397611 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397707 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397287 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397858 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397795 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397811 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397872 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397900 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397839 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.397965 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.398122 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.398186 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.398299 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398220 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398350 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398354 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399265 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399439 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399547 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.494106 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.397915 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.398107 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.398159 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398297 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.398307 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398739 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398839 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398910 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.399279 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.099033 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.102429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.103229 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.130602 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podStartSLOduration=126.130580603 podStartE2EDuration="2m6.130580603s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:38.129403812 +0000 UTC m=+148.830752323" watchObservedRunningTime="2026-01-30 13:45:38.130580603 +0000 UTC m=+148.831929104" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.681288 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xfcvw"] Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.681629 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:38 crc kubenswrapper[4793]: E0130 13:45:38.681710 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:39 crc kubenswrapper[4793]: I0130 13:45:39.397754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:39 crc kubenswrapper[4793]: I0130 13:45:39.397793 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:39 crc kubenswrapper[4793]: E0130 13:45:39.397916 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:39 crc kubenswrapper[4793]: I0130 13:45:39.397973 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:39 crc kubenswrapper[4793]: E0130 13:45:39.398080 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:39 crc kubenswrapper[4793]: E0130 13:45:39.398142 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:40 crc kubenswrapper[4793]: I0130 13:45:40.398402 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:40 crc kubenswrapper[4793]: E0130 13:45:40.399627 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:40 crc kubenswrapper[4793]: E0130 13:45:40.494507 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322323 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322567 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.322532907 +0000 UTC m=+274.023881398 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322885 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322918 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322945 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323004 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323029 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322957 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.322947218 +0000 UTC m=+274.024295709 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.323109622 +0000 UTC m=+274.024458143 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323169 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.323157443 +0000 UTC m=+274.024505964 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.397751 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.397879 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.398384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.397912 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.398847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.400005 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.423730 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.423948 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.423978 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.423993 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.424106 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.424059113 +0000 UTC m=+274.125407604 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:45:42 crc kubenswrapper[4793]: I0130 13:45:42.397786 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:42 crc kubenswrapper[4793]: E0130 13:45:42.398206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:42 crc kubenswrapper[4793]: I0130 13:45:42.413352 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:45:42 crc kubenswrapper[4793]: I0130 13:45:42.413422 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:45:43 crc kubenswrapper[4793]: I0130 13:45:43.397761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:43 crc kubenswrapper[4793]: I0130 13:45:43.397824 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:43 crc kubenswrapper[4793]: E0130 13:45:43.397906 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:43 crc kubenswrapper[4793]: I0130 13:45:43.397787 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:43 crc kubenswrapper[4793]: E0130 13:45:43.398024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:43 crc kubenswrapper[4793]: E0130 13:45:43.398122 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:44 crc kubenswrapper[4793]: I0130 13:45:44.397798 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:44 crc kubenswrapper[4793]: E0130 13:45:44.398109 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:45 crc kubenswrapper[4793]: I0130 13:45:45.397412 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:45 crc kubenswrapper[4793]: I0130 13:45:45.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:45 crc kubenswrapper[4793]: I0130 13:45:45.397502 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:45 crc kubenswrapper[4793]: E0130 13:45:45.397569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:45 crc kubenswrapper[4793]: E0130 13:45:45.397730 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:45 crc kubenswrapper[4793]: E0130 13:45:45.397815 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:46 crc kubenswrapper[4793]: I0130 13:45:46.397986 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:46 crc kubenswrapper[4793]: I0130 13:45:46.400484 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:45:46 crc kubenswrapper[4793]: I0130 13:45:46.409905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.401483 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.401483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.401488 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.403746 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.404270 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.404747 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.404796 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:45:51 crc kubenswrapper[4793]: I0130 13:45:51.274587 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.487417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.529294 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.529856 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.530428 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.530974 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.532337 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.532491 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.532632 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-65rgb"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.533032 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.533470 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ztcbh"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.534025 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.535263 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.535690 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536407 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cwwfj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536568 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536840 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536925 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.537369 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sd6hs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.537769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.539732 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrj8g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.540114 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.540203 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.540394 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.541092 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-client\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549851 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs25n\" (UniqueName: \"kubernetes.io/projected/ea703d52-c081-418f-9343-61b68296314f-kube-api-access-qs25n\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-trusted-ca\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-serving-cert\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8wz\" (UniqueName: \"kubernetes.io/projected/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-kube-api-access-wl8wz\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-service-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: 
\"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550223 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-encryption-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550327 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-etcd-client\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550351 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-image-import-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550649 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99444dfd-71c4-4d2d-a94a-cecc7a740423-metrics-tls\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550700 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhrw\" (UniqueName: \"kubernetes.io/projected/99444dfd-71c4-4d2d-a94a-cecc7a740423-kube-api-access-5nhrw\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-serving-cert\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550948 4793 
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-audit\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551039 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-node-pullsecrets\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551294 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551355 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4z45\" (UniqueName: \"kubernetes.io/projected/3806824c-28d3-47d4-b33f-01d9ab1239b8-kube-api-access-n4z45\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-audit-dir\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551434 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-serving-cert\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9kbq\" (UniqueName: \"kubernetes.io/projected/6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2-kube-api-access-r9kbq\") pod \"downloads-7954f5f757-sd6hs\" (UID: \"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2\") " pod="openshift-console/downloads-7954f5f757-sd6hs"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-config\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551544 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-etcd-serving-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.556999 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.557289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.557555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.558311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.558495 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.557116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.563023 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.563428 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.563551 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.567906 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.567990 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.567910 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.568418 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.568530 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.568695 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.569315 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.569826 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.570699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.572190 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.576457 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.576779 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.587435 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.587611 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.589067 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-56g7n"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.589537 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.591358 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.591777 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.591941 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592188 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592253 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592526 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592567 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592670 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592764 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592850 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592987 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.593169 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594359 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594440 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594600 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594647 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594691 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594898 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595109 
4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595122 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595241 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595400 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595553 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595738 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595932 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.596135 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.596731 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.598752 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599232 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599701 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599907 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599958 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.600203 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.600470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.603272 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.605798 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5l76j"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.606255 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.606845 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.608351 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.608816 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.609113 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.609470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.612997 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-899ps"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.613710 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.614093 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.614143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615302 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615416 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615837 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615963 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.616408 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.617355 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.617621 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.617705 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.618163 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.619477 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2lv2p"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.619954 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.620599 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.622674 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.623143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.625512 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.630559 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.631306 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.631689 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.631942 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.632233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.632820 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.634380 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.634386 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.649197 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.649293 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.649434 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650184 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650427 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650667 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651281 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651445 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651670 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651915 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.652187 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.652211 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.654309 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.654546 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.655537 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.657430 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660719 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660919 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b72b54ef-6699-4091-b47d-f05f7c85adb2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-stats-auth\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbjm\" (UniqueName: \"kubernetes.io/projected/b72b54ef-6699-4091-b47d-f05f7c85adb2-kube-api-access-2mbjm\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661115 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-encryption-config\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tk6n\" (UniqueName: \"kubernetes.io/projected/7fc1ca51-0362-4492-ba07-8c5413c39deb-kube-api-access-9tk6n\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661246 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-serving-cert\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94vd\" (UniqueName: \"kubernetes.io/projected/d2aa0043-dc77-41ca-a95f-2d119ed48053-kube-api-access-r94vd\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-default-certificate\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661463 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661585 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661600 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-service-ca-bundle\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-metrics-certs\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661669 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661701 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661815 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.662833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.664119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.664908 4793 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665471 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fc1ca51-0362-4492-ba07-8c5413c39deb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665582 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46caba5b-4a87-480a-ac56-437102a31802-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665692 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs25n\" (UniqueName: \"kubernetes.io/projected/ea703d52-c081-418f-9343-61b68296314f-kube-api-access-qs25n\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665936 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpjf\" (UniqueName: \"kubernetes.io/projected/e2a53aac-c9f7-465c-821b-cd62aa893d13-kube-api-access-9kpjf\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665994 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgsg\" (UniqueName: \"kubernetes.io/projected/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-kube-api-access-wfgsg\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666023 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-client\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46caba5b-4a87-480a-ac56-437102a31802-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666344 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-trusted-ca\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666369 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnpxc\" (UniqueName: \"kubernetes.io/projected/46caba5b-4a87-480a-ac56-437102a31802-kube-api-access-lnpxc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666516 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666889 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pcw\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-kube-api-access-p2pcw\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-serving-cert\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.667001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.667109 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8wz\" (UniqueName: \"kubernetes.io/projected/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-kube-api-access-wl8wz\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.667154 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.668592 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669298 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669322 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669396 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669444 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669543 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.699835 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.700005 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.700482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-trusted-ca\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.701429 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.701579 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702314 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702347 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702416 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702487 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702600 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702824 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702842 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703094 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703352 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703814 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704136 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704250 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.685360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51800ff9-fe19-4a50-a272-be1de629ec82-config\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6fx\" (UniqueName: \"kubernetes.io/projected/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-kube-api-access-jf6fx\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-auth-proxy-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-policies\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704587 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cd7922e2-3b17-4212-94b3-2405e20841ad-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-service-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704623 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-encryption-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704638 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-etcd-client\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704702 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a53aac-c9f7-465c-821b-cd62aa893d13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-image-import-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: 
\"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ft4\" (UniqueName: \"kubernetes.io/projected/4e62edf8-f827-4fa6-8b40-563c821707ae-kube-api-access-82ft4\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704814 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvm8\" (UniqueName: \"kubernetes.io/projected/c44b9aaf-de3a-48a8-8760-5553255887ac-kube-api-access-jlvm8\") pod \"migrator-59844c95c7-q5442\" (UID: \"c44b9aaf-de3a-48a8-8760-5553255887ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704829 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1faa169d-53de-456e-8f99-f93dc2772719-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce07df7-af19-4334-b704-818df47958a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704860 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704915 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-config\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704932 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99444dfd-71c4-4d2d-a94a-cecc7a740423-metrics-tls\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704947 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhrw\" (UniqueName: \"kubernetes.io/projected/99444dfd-71c4-4d2d-a94a-cecc7a740423-kube-api-access-5nhrw\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704989 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705003 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7922e2-3b17-4212-94b3-2405e20841ad-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705033 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-serving-cert\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705060 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51800ff9-fe19-4a50-a272-be1de629ec82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705091 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705110 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-config\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-audit\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705138 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-dir\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705153 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-config\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705186 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705209 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705224 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-machine-approver-tls\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwgj\" (UniqueName: \"kubernetes.io/projected/4ce07df7-af19-4334-b704-818df47958a1-kube-api-access-ckwgj\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705255 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-node-pullsecrets\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705268 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e62edf8-f827-4fa6-8b40-563c821707ae-serving-cert\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705281 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daa9599a-67b0-421e-8add-0656c0b98af2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705321 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705354 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4z45\" (UniqueName: \"kubernetes.io/projected/3806824c-28d3-47d4-b33f-01d9ab1239b8-kube-api-access-n4z45\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-audit-dir\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a53aac-c9f7-465c-821b-cd62aa893d13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-serving-cert\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9kbq\" (UniqueName: 
\"kubernetes.io/projected/6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2-kube-api-access-r9kbq\") pod \"downloads-7954f5f757-sd6hs\" (UID: \"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2\") " pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705530 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51800ff9-fe19-4a50-a272-be1de629ec82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705571 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1faa169d-53de-456e-8f99-f93dc2772719-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-config\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705614 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705635 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1faa169d-53de-456e-8f99-f93dc2772719-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-etcd-serving-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlclv\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-kube-api-access-wlclv\") 
pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-client\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/daa9599a-67b0-421e-8add-0656c0b98af2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705772 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ce07df7-af19-4334-b704-818df47958a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.706440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-service-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.706837 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.711172 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.713616 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.714603 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.715037 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.715286 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.715397 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.716793 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.717061 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.720648 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-node-pullsecrets\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.721858 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-config\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.722309 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-audit\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.723125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-etcd-serving-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.724960 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.725410 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9v6k"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.725697 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.726023 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.727714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.727847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-audit-dir\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.728316 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.728459 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.729462 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.731488 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.731525 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-image-import-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.731904 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.732404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.732636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.733329 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.734134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-etcd-client\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.735744 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gsr67"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.736405 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5l76j"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.736484 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.736684 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.737144 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-encryption-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.738208 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.738384 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.738638 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-client\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739571 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-config\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739624 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrj8g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-serving-cert\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.742015 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-988dg"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.744673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745644 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745664 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745721 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.746610 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-serving-cert\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.746651 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cwwfj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.748964 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-serving-cert\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.749037 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ztcbh"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.749580 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99444dfd-71c4-4d2d-a94a-cecc7a740423-metrics-tls\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc 
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.752364 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.753988 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-65rgb"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.754483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.755274 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.766807 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.766876 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.773748 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.782407 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.784622 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.789109 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-56g7n"]
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.799741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807503 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b72b54ef-6699-4091-b47d-f05f7c85adb2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-stats-auth\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-encryption-config\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tk6n\" (UniqueName: \"kubernetes.io/projected/7fc1ca51-0362-4492-ba07-8c5413c39deb-kube-api-access-9tk6n\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbjm\" (UniqueName: \"kubernetes.io/projected/b72b54ef-6699-4091-b47d-f05f7c85adb2-kube-api-access-2mbjm\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807697 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94vd\" (UniqueName: \"kubernetes.io/projected/d2aa0043-dc77-41ca-a95f-2d119ed48053-kube-api-access-r94vd\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-default-certificate\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-serving-cert\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807804 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807863 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-service-ca-bundle\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-metrics-certs\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807953 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fc1ca51-0362-4492-ba07-8c5413c39deb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807981 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46caba5b-4a87-480a-ac56-437102a31802-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808027 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808073 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpjf\" (UniqueName: \"kubernetes.io/projected/e2a53aac-c9f7-465c-821b-cd62aa893d13-kube-api-access-9kpjf\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808176 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46caba5b-4a87-480a-ac56-437102a31802-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808201 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808255 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgsg\" (UniqueName: \"kubernetes.io/projected/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-kube-api-access-wfgsg\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-client\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808306 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnpxc\" (UniqueName: \"kubernetes.io/projected/46caba5b-4a87-480a-ac56-437102a31802-kube-api-access-lnpxc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pcw\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-kube-api-access-p2pcw\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51800ff9-fe19-4a50-a272-be1de629ec82-config\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"
Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6fx\" (UniqueName: \"kubernetes.io/projected/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-kube-api-access-jf6fx\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p"
\"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808503 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-auth-proxy-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808529 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-policies\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808555 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd7922e2-3b17-4212-94b3-2405e20841ad-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808578 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808637 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a53aac-c9f7-465c-821b-cd62aa893d13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ft4\" (UniqueName: \"kubernetes.io/projected/4e62edf8-f827-4fa6-8b40-563c821707ae-kube-api-access-82ft4\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808693 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: 
\"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808772 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1faa169d-53de-456e-8f99-f93dc2772719-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvm8\" (UniqueName: \"kubernetes.io/projected/c44b9aaf-de3a-48a8-8760-5553255887ac-kube-api-access-jlvm8\") pod \"migrator-59844c95c7-q5442\" (UID: \"c44b9aaf-de3a-48a8-8760-5553255887ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808822 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-config\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808876 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce07df7-af19-4334-b704-818df47958a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808911 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808936 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 
13:45:52.808974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51800ff9-fe19-4a50-a272-be1de629ec82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809136 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7922e2-3b17-4212-94b3-2405e20841ad-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809160 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-dir\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809201 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-config\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-machine-approver-tls\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809278 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwgj\" (UniqueName: 
\"kubernetes.io/projected/4ce07df7-af19-4334-b704-818df47958a1-kube-api-access-ckwgj\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809319 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daa9599a-67b0-421e-8add-0656c0b98af2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e62edf8-f827-4fa6-8b40-563c821707ae-serving-cert\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809428 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809454 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809495 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809534 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a53aac-c9f7-465c-821b-cd62aa893d13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: 
\"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51800ff9-fe19-4a50-a272-be1de629ec82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1faa169d-53de-456e-8f99-f93dc2772719-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1faa169d-53de-456e-8f99-f93dc2772719-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlclv\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-kube-api-access-wlclv\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ce07df7-af19-4334-b704-818df47958a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809791 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/daa9599a-67b0-421e-8add-0656c0b98af2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.814652 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-encryption-config\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.815888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.816614 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a53aac-c9f7-465c-821b-cd62aa893d13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.825845 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.825884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-serving-cert\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.827635 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.828218 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.828655 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.828879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.829422 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.830277 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.830793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-auth-proxy-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.831521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.836541 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.836588 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-899ps"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.836667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.838337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-policies\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.843097 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46caba5b-4a87-480a-ac56-437102a31802-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.851897 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1faa169d-53de-456e-8f99-f93dc2772719-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.852847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cd7922e2-3b17-4212-94b3-2405e20841ad-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.855162 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.855797 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.856364 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-config\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.857277 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.857313 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gsr67"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.857326 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.866995 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-dir\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.869432 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1faa169d-53de-456e-8f99-f93dc2772719-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.870600 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.874805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-machine-approver-tls\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.874828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.875346 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.875790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7922e2-3b17-4212-94b3-2405e20841ad-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.875835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce07df7-af19-4334-b704-818df47958a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fc1ca51-0362-4492-ba07-8c5413c39deb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876807 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877158 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ce07df7-af19-4334-b704-818df47958a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc 
kubenswrapper[4793]: I0130 13:45:52.877318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e62edf8-f827-4fa6-8b40-563c821707ae-serving-cert\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877467 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-client\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877900 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.878020 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.878058 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.879034 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.880186 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46caba5b-4a87-480a-ac56-437102a31802-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.880497 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 
13:45:52.882859 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.883156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.885320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a53aac-c9f7-465c-821b-cd62aa893d13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.885387 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4pnff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.886239 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2lf59"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.886602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.886876 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.887420 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.887569 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.888725 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.893817 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.894851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.898110 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9v6k"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.899228 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sd6hs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.900292 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.903688 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.905189 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.906867 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.908752 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.910117 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.911967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.912670 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2lf59"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.912773 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.913414 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.915230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.917069 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.918103 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4pnff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.919369 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.932907 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.952151 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.960087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.972245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.977381 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.992536 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.012845 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.031748 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.052285 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.072200 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.104670 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.112889 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51800ff9-fe19-4a50-a272-be1de629ec82-config\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.114009 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.131532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-stats-auth\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.134167 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.137395 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-service-ca-bundle\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.152930 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.160193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-metrics-certs\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.172129 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.192898 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.212632 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.224606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-default-certificate\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.232364 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.252456 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.273212 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.293572 
4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.312368 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.320965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51800ff9-fe19-4a50-a272-be1de629ec82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.332025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.343574 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/daa9599a-67b0-421e-8add-0656c0b98af2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.352370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.372907 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.393441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.417396 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.419258 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daa9599a-67b0-421e-8add-0656c0b98af2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.432452 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.452587 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.472303 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.493220 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.512293 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 
13:45:53.533305 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.552128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.572490 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.581824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.592707 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.597730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-config\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.631845 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.653273 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.665399 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b72b54ef-6699-4091-b47d-f05f7c85adb2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.670969 4793 request.go:700] Waited for 1.002938905s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.672750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.693032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.713623 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.732713 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.753529 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.773560 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.792600 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.833114 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs25n\" (UniqueName: \"kubernetes.io/projected/ea703d52-c081-418f-9343-61b68296314f-kube-api-access-qs25n\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.840132 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.846229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8wz\" (UniqueName: \"kubernetes.io/projected/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-kube-api-access-wl8wz\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.852512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.872406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.894396 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.912320 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.932750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.957886 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.972765 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.031037 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.031703 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.032912 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.034671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.052099 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.068720 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.072170 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.079862 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cwwfj"] Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.090567 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea703d52_c081_418f_9343_61b68296314f.slice/crio-9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc WatchSource:0}: Error finding container 9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc: Status 404 returned error can't find the container with id 9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.091906 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.102660 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.112877 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.132743 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.153786 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.186923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerStarted","Data":"9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc"} Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.193857 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhrw\" (UniqueName: \"kubernetes.io/projected/99444dfd-71c4-4d2d-a94a-cecc7a740423-kube-api-access-5nhrw\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.212207 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.219448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.233366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.243875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9kbq\" (UniqueName: \"kubernetes.io/projected/6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2-kube-api-access-r9kbq\") pod \"downloads-7954f5f757-sd6hs\" (UID: \"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2\") " pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.252043 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.281139 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.286245 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4z45\" (UniqueName: \"kubernetes.io/projected/3806824c-28d3-47d4-b33f-01d9ab1239b8-kube-api-access-n4z45\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.295428 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.313813 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 13:45:54 crc kubenswrapper[4793]: 
I0130 13:45:54.339158 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.352268 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.372945 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.377777 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-65rgb"] Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.384229 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8aacb4a_f044_427a_b5ef_1d4126b98a6a.slice/crio-f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225 WatchSource:0}: Error finding container f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225: Status 404 returned error can't find the container with id f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225 Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.391719 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.407566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.412760 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.433304 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.453315 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.459974 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xfcvw"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.461643 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.472580 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.481683 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3401bbdc_090b_402b_bf7b_a4a823182946.slice/crio-f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe WatchSource:0}: Error finding container f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe: Status 404 returned error can't find the container with id f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.492028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.512096 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.532762 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.552533 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.576754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.580022 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ztcbh"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.594030 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tk6n\" (UniqueName: \"kubernetes.io/projected/7fc1ca51-0362-4492-ba07-8c5413c39deb-kube-api-access-9tk6n\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.607179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ft4\" (UniqueName: \"kubernetes.io/projected/4e62edf8-f827-4fa6-8b40-563c821707ae-kube-api-access-82ft4\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.630302 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbjm\" (UniqueName: \"kubernetes.io/projected/b72b54ef-6699-4091-b47d-f05f7c85adb2-kube-api-access-2mbjm\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.649690 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94vd\" (UniqueName: \"kubernetes.io/projected/d2aa0043-dc77-41ca-a95f-2d119ed48053-kube-api-access-r94vd\") 
pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.653544 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.667872 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sd6hs"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.672375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.690395 4793 request.go:700] Waited for 1.863407367s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.691786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.693837 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9a73cf_3a15_4a72_9d5a_2cdd62318ea2.slice/crio-bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3 WatchSource:0}: Error finding container bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3: Status 404 returned error can't find the container with id bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3 Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.710693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnpxc\" (UniqueName: \"kubernetes.io/projected/46caba5b-4a87-480a-ac56-437102a31802-kube-api-access-lnpxc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.711383 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.719314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.726241 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.728279 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.768841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2pcw\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-kube-api-access-p2pcw\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.786290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpjf\" (UniqueName: \"kubernetes.io/projected/e2a53aac-c9f7-465c-821b-cd62aa893d13-kube-api-access-9kpjf\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.788908 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6fx\" (UniqueName: \"kubernetes.io/projected/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-kube-api-access-jf6fx\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.792399 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.808586 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgsg\" (UniqueName: \"kubernetes.io/projected/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-kube-api-access-wfgsg\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.810223 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.820531 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.824611 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.826214 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvm8\" (UniqueName: \"kubernetes.io/projected/c44b9aaf-de3a-48a8-8760-5553255887ac-kube-api-access-jlvm8\") pod \"migrator-59844c95c7-q5442\" (UID: \"c44b9aaf-de3a-48a8-8760-5553255887ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.866805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.875753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.889037 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckwgj\" (UniqueName: \"kubernetes.io/projected/4ce07df7-af19-4334-b704-818df47958a1-kube-api-access-ckwgj\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.907734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.927490 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51800ff9-fe19-4a50-a272-be1de629ec82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.930880 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.950958 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1faa169d-53de-456e-8f99-f93dc2772719-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.961594 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.973585 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.976238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlclv\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-kube-api-access-wlclv\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.996754 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.997065 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.007762 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.015660 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.027808 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.031918 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrj8g"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.044264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.046769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.047167 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.055061 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.057785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.060518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.069444 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.074947 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.077473 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.097959 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.114706 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5l76j"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.149240 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/afa7929d-37a8-4fa2-9733-158cab1c40ec-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghn8d\" (UniqueName: \"kubernetes.io/projected/afa7929d-37a8-4fa2-9733-158cab1c40ec-kube-api-access-ghn8d\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-images\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174588 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzl2\" (UniqueName: \"kubernetes.io/projected/9fca2cfc-e4a0-42a0-9815-424987b55fd5-kube-api-access-pxzl2\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174617 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174668 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-config\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174693 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fca2cfc-e4a0-42a0-9815-424987b55fd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174714 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174733 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fca2cfc-e4a0-42a0-9815-424987b55fd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174795 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.175828 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.67581835 +0000 UTC m=+166.377166841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: W0130 13:45:55.196296 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3806824c_28d3_47d4_b33f_01d9ab1239b8.slice/crio-ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02 WatchSource:0}: Error finding container ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02: Status 404 returned error can't find the container with id ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02 Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.223600 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.233138 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-65rgb" event={"ID":"e8aacb4a-f044-427a-b5ef-1d4126b98a6a","Type":"ContainerStarted","Data":"35104949249c3b797524bbbce708846543e38271ca4497bb48cec0610fbb4e5d"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.233205 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-65rgb" event={"ID":"e8aacb4a-f044-427a-b5ef-1d4126b98a6a","Type":"ContainerStarted","Data":"f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.233864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.235965 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-65rgb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.236006 4793 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podUID="e8aacb4a-f044-427a-b5ef-1d4126b98a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.237677 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea703d52-c081-418f-9343-61b68296314f" containerID="b3d5fccd5ce91cfa10f3aa4efa67a5dac5276d91c9b96650348862da038b3fad" exitCode=0 Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.237724 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerDied","Data":"b3d5fccd5ce91cfa10f3aa4efa67a5dac5276d91c9b96650348862da038b3fad"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.242360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"75caf8f25739686e2addb206cbde5492323c176d0cd4b36001b212b0c13ae756"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.246138 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" event={"ID":"3401bbdc-090b-402b-bf7b-a4a823182946","Type":"ContainerStarted","Data":"e77637d9122e133a6d2b2a42071821a75959ea573de24e01ab364993d4834504"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.246171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" event={"ID":"3401bbdc-090b-402b-bf7b-a4a823182946","Type":"ContainerStarted","Data":"f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.251181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerStarted","Data":"9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.251210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerStarted","Data":"029de3b1f28797b6cbbf4b7545deaf6781dd6b3401588287ec9fa2ad62c13962"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.251890 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.253789 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.253817 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: 
connection refused" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.263946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerStarted","Data":"f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.263993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerStarted","Data":"bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.265944 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" event={"ID":"99444dfd-71c4-4d2d-a94a-cecc7a740423","Type":"ContainerStarted","Data":"1bb3533f2f821097a35d4b358c1f72ed9ac789a3e4a473ad96c9b00830444be3"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.265965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" event={"ID":"99444dfd-71c4-4d2d-a94a-cecc7a740423","Type":"ContainerStarted","Data":"58afd350517e81ce61a630548fc3831c772035b08a3aa070c55c46f08a0f8f91"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.277007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.277314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.777270355 +0000 UTC m=+166.478618846 (durationBeforeRetry 500ms). 
Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.277314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.777270355 +0000 UTC m=+166.478618846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282684 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff54l\" (UniqueName: \"kubernetes.io/projected/9932b998-297e-47a4-a005-ccfca0665793-kube-api-access-ff54l\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282807 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/26050dc1-aaba-45b6-8633-015f5e4261f0-metrics-tls\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282865 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgtm\" (UniqueName: \"kubernetes.io/projected/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-kube-api-access-vjgtm\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wq7p\" (UniqueName: \"kubernetes.io/projected/ff810089-efad-424c-8537-f528803767c7-kube-api-access-2wq7p\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282956 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cf2\" (UniqueName: \"kubernetes.io/projected/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-kube-api-access-45cf2\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"
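[editor's note] Both nestedpendingoperations errors refuse retries for 500ms ("durationBeforeRetry 500ms"): the volume manager backs off exponentially on a repeatedly failing operation instead of hot-looping. A sketch of that retry shape using apimachinery's wait helpers; the initial delay and factor here are illustrative (taken from the 500ms seen above), not the kubelet's exact caps:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// Fails until the CSI driver registers, like MountDevice/TearDown above.
func mountDevice() error {
	return errors.New("driver kubevirt.io.hostpath-provisioner not registered")
}

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first durationBeforeRetry
		Factor:   2.0,                    // double the wait after each failure
		Steps:    5,                      // give up after five attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := mountDevice(); err != nil {
			fmt.Println("retrying:", err)
			return false, nil // not done; keep backing off
		}
		return true, nil
	})
	fmt.Println("final:", err) // wait.ErrWaitTimeout if it never succeeded
}
```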
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f30f4833-f565-4225-a45a-02c0f592c37b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbxz\" (UniqueName: \"kubernetes.io/projected/10c05bcf-ffb2-4175-b323-067804ea3391-kube-api-access-7jbxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/afa7929d-37a8-4fa2-9733-158cab1c40ec-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283163 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283226 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghn8d\" (UniqueName: \"kubernetes.io/projected/afa7929d-37a8-4fa2-9733-158cab1c40ec-kube-api-access-ghn8d\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283261 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-webhook-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-plugins-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jsh\" (UniqueName: \"kubernetes.io/projected/ee3323fa-00f7-45ee-8d54-040e40398b5a-kube-api-access-g9jsh\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283467 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-tmpfs\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283508 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-images\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283560 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzl2\" (UniqueName: \"kubernetes.io/projected/9fca2cfc-e4a0-42a0-9815-424987b55fd5-kube-api-access-pxzl2\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283716 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-certs\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj7j\" (UniqueName: \"kubernetes.io/projected/f30f4833-f565-4225-a45a-02c0f592c37b-kube-api-access-8gj7j\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-srv-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-proxy-tls\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-srv-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-csi-data-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-proxy-tls\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26050dc1-aaba-45b6-8633-015f5e4261f0-config-volume\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284153 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-config\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747rb\" (UniqueName: \"kubernetes.io/projected/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-kube-api-access-747rb\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-socket-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-node-bootstrap-token\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284251 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284282 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284299 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e495d5-6c64-4452-b05c-36e055a100b4-cert\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284344 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqhk\" (UniqueName: \"kubernetes.io/projected/26050dc1-aaba-45b6-8633-015f5e4261f0-kube-api-access-khqhk\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwd2\" (UniqueName: \"kubernetes.io/projected/58e495d5-6c64-4452-b05c-36e055a100b4-kube-api-access-nfwd2\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fca2cfc-e4a0-42a0-9815-424987b55fd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6283f5-d30b-483e-8772-456b0109a14b-config\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284428 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6283f5-d30b-483e-8772-456b0109a14b-serving-cert\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284480 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn47d\" (UniqueName: \"kubernetes.io/projected/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-kube-api-access-tn47d\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284638 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-mountpoint-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-images\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284690 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-registration-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85cn\" (UniqueName: \"kubernetes.io/projected/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-kube-api-access-s85cn\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284759 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fca2cfc-e4a0-42a0-9815-424987b55fd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284810 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9932b998-297e-47a4-a005-ccfca0665793-signing-cabundle\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284838 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vznc\" (UniqueName: \"kubernetes.io/projected/8b6283f5-d30b-483e-8772-456b0109a14b-kube-api-access-5vznc\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9932b998-297e-47a4-a005-ccfca0665793-signing-key\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284904 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c05bcf-ffb2-4175-b323-067804ea3391-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284940 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.285004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.290953 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fca2cfc-e4a0-42a0-9815-424987b55fd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.293005 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fca2cfc-e4a0-42a0-9815-424987b55fd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.294200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-images\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.297279 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.79726132 +0000 UTC m=+166.498609891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.297749 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.299716 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/afa7929d-37a8-4fa2-9733-158cab1c40ec-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.301315 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.301736 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.301786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-config\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.302322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.323015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.335459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.354077 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"]
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.354992 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzl2\" (UniqueName: \"kubernetes.io/projected/9fca2cfc-e4a0-42a0-9815-424987b55fd5-kube-api-access-pxzl2\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.369319 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"]
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.374602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386515 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-socket-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-node-bootstrap-token\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg"
13:45:55.386853 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386916 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e495d5-6c64-4452-b05c-36e055a100b4-cert\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqhk\" (UniqueName: \"kubernetes.io/projected/26050dc1-aaba-45b6-8633-015f5e4261f0-kube-api-access-khqhk\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6283f5-d30b-483e-8772-456b0109a14b-config\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387022 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwd2\" (UniqueName: \"kubernetes.io/projected/58e495d5-6c64-4452-b05c-36e055a100b4-kube-api-access-nfwd2\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6283f5-d30b-483e-8772-456b0109a14b-serving-cert\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: 
\"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn47d\" (UniqueName: \"kubernetes.io/projected/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-kube-api-access-tn47d\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387189 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-mountpoint-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-registration-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s85cn\" (UniqueName: \"kubernetes.io/projected/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-kube-api-access-s85cn\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-images\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9932b998-297e-47a4-a005-ccfca0665793-signing-cabundle\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vznc\" (UniqueName: \"kubernetes.io/projected/8b6283f5-d30b-483e-8772-456b0109a14b-kube-api-access-5vznc\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387470 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9932b998-297e-47a4-a005-ccfca0665793-signing-key\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387497 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c05bcf-ffb2-4175-b323-067804ea3391-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387554 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff54l\" (UniqueName: \"kubernetes.io/projected/9932b998-297e-47a4-a005-ccfca0665793-kube-api-access-ff54l\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/26050dc1-aaba-45b6-8633-015f5e4261f0-metrics-tls\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgtm\" (UniqueName: \"kubernetes.io/projected/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-kube-api-access-vjgtm\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wq7p\" (UniqueName: \"kubernetes.io/projected/ff810089-efad-424c-8537-f528803767c7-kube-api-access-2wq7p\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387762 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cf2\" (UniqueName: 
\"kubernetes.io/projected/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-kube-api-access-45cf2\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f30f4833-f565-4225-a45a-02c0f592c37b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jbxz\" (UniqueName: \"kubernetes.io/projected/10c05bcf-ffb2-4175-b323-067804ea3391-kube-api-access-7jbxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-webhook-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-plugins-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jsh\" (UniqueName: \"kubernetes.io/projected/ee3323fa-00f7-45ee-8d54-040e40398b5a-kube-api-access-g9jsh\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-tmpfs\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") 
pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388521 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gj7j\" (UniqueName: \"kubernetes.io/projected/f30f4833-f565-4225-a45a-02c0f592c37b-kube-api-access-8gj7j\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-srv-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388584 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-certs\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-proxy-tls\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388674 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-srv-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-csi-data-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388831 4793 
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26050dc1-aaba-45b6-8633-015f5e4261f0-config-volume\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-proxy-tls\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-747rb\" (UniqueName: \"kubernetes.io/projected/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-kube-api-access-747rb\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.390353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-plugins-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.390424 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-mountpoint-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.390473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-registration-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.391238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-tmpfs\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.391254 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghn8d\" (UniqueName: \"kubernetes.io/projected/afa7929d-37a8-4fa2-9733-158cab1c40ec-kube-api-access-ghn8d\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n"
Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.391507 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.891470585 +0000 UTC m=+166.592819116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.391679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-socket-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.392365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-images\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.404992 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.405195 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.405763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9932b998-297e-47a4-a005-ccfca0665793-signing-cabundle\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.406187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-webhook-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.407287 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-srv-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.408950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6283f5-d30b-483e-8772-456b0109a14b-config\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.409417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.409407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-csi-data-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.410328 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.410603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-node-bootstrap-token\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.412526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.412695 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.416557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.418626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f30f4833-f565-4225-a45a-02c0f592c37b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"
\"kubernetes.io/secret/26050dc1-aaba-45b6-8633-015f5e4261f0-metrics-tls\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.422655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9932b998-297e-47a4-a005-ccfca0665793-signing-key\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.425493 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.425709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.426002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c05bcf-ffb2-4175-b323-067804ea3391-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.426760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.426964 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-certs\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.432993 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-proxy-tls\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.434800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-747rb\" (UniqueName: \"kubernetes.io/projected/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-kube-api-access-747rb\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.435184 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/26050dc1-aaba-45b6-8633-015f5e4261f0-config-volume\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.440629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-proxy-tls\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.442159 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-srv-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.442443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.444981 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e495d5-6c64-4452-b05c-36e055a100b4-cert\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.447361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6283f5-d30b-483e-8772-456b0109a14b-serving-cert\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.454579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn47d\" (UniqueName: \"kubernetes.io/projected/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-kube-api-access-tn47d\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.476788 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85cn\" (UniqueName: \"kubernetes.io/projected/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-kube-api-access-s85cn\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.480377 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.481555 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.486421 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.489258 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.489916 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.490210 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.990199148 +0000 UTC m=+166.691547639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.490581 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.499506 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.508193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgtm\" (UniqueName: \"kubernetes.io/projected/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-kube-api-access-vjgtm\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.511524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jsh\" (UniqueName: \"kubernetes.io/projected/ee3323fa-00f7-45ee-8d54-040e40398b5a-kube-api-access-g9jsh\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.518777 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.537861 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wq7p\" (UniqueName: \"kubernetes.io/projected/ff810089-efad-424c-8537-f528803767c7-kube-api-access-2wq7p\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.564567 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-45cf2\" (UniqueName: \"kubernetes.io/projected/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-kube-api-access-45cf2\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.570654 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.591086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jbxz\" (UniqueName: \"kubernetes.io/projected/10c05bcf-ffb2-4175-b323-067804ea3391-kube-api-access-7jbxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.591515 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.591871 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.091854358 +0000 UTC m=+166.793202849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.600324 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: W0130 13:45:55.607108 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f6bee7_a66e_4cec_83d5_6c0796a73e22.slice/crio-24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c WatchSource:0}: Error finding container 24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c: Status 404 returned error can't find the container with id 24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.607317 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.615761 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.628237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqhk\" (UniqueName: \"kubernetes.io/projected/26050dc1-aaba-45b6-8633-015f5e4261f0-kube-api-access-khqhk\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.632358 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.660207 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gj7j\" (UniqueName: \"kubernetes.io/projected/f30f4833-f565-4225-a45a-02c0f592c37b-kube-api-access-8gj7j\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.697806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff54l\" (UniqueName: \"kubernetes.io/projected/9932b998-297e-47a4-a005-ccfca0665793-kube-api-access-ff54l\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.698457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.698993 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.198977022 +0000 UTC m=+166.900325523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.703970 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.706892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwd2\" (UniqueName: \"kubernetes.io/projected/58e495d5-6c64-4452-b05c-36e055a100b4-kube-api-access-nfwd2\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.720486 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.744723 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.746255 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.747464 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.750519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vznc\" (UniqueName: \"kubernetes.io/projected/8b6283f5-d30b-483e-8772-456b0109a14b-kube-api-access-5vznc\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.752358 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.757720 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.760369 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.766219 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.800150 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.800769 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.300752775 +0000 UTC m=+167.002101256 (durationBeforeRetry 500ms). 
Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.800769 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.300752775 +0000 UTC m=+167.002101256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.803413 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k"
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.828365 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"
Jan 30 13:45:55 crc kubenswrapper[4793]: W0130 13:45:55.870813 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a53aac_c9f7_465c_821b_cd62aa893d13.slice/crio-ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e WatchSource:0}: Error finding container ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e: Status 404 returned error can't find the container with id ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e
Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.901547 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.902244 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.402229681 +0000 UTC m=+167.103578172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:55.998465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.002704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.002975 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.502960807 +0000 UTC m=+167.204309298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.050403 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.070645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.098698 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-899ps"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.104287 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.104637 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.604623647 +0000 UTC m=+167.305972138 (durationBeforeRetry 500ms). 
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.104637 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.604623647 +0000 UTC m=+167.305972138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.115207 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-56g7n"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.221423 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.221758 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.721741934 +0000 UTC m=+167.423090425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.223596 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gsr67"]
Jan 30 13:45:56 crc kubenswrapper[4793]: W0130 13:45:56.271813 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce07df7_af19_4334_b704_818df47958a1.slice/crio-40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977 WatchSource:0}: Error finding container 40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977: Status 404 returned error can't find the container with id 40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977
Jan 30 13:45:56 crc kubenswrapper[4793]: W0130 13:45:56.290543 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa7929d_37a8_4fa2_9733_158cab1c40ec.slice/crio-0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f WatchSource:0}: Error finding container 0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f: Status 404 returned error can't find the container with id 0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.292208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" event={"ID":"daa9599a-67b0-421e-8add-0656c0b98af2","Type":"ContainerStarted","Data":"49a180b06b2102f1f0bdd289dc2e1b6c881d599af48ea9adf0dbf94bab3b6d0e"}
"SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" event={"ID":"3806824c-28d3-47d4-b33f-01d9ab1239b8","Type":"ContainerStarted","Data":"ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.304256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" event={"ID":"e2a53aac-c9f7-465c-821b-cd62aa893d13","Type":"ContainerStarted","Data":"ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.319379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lv2p" event={"ID":"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc","Type":"ContainerStarted","Data":"9a3a1e27832473618b66e6b2c1055e6a48e70a3eca61ad8b9f60c802f1d3f22a"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.323956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.325361 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"] Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.325962 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.825944931 +0000 UTC m=+167.527293422 (durationBeforeRetry 500ms). 
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.325962 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.825944931 +0000 UTC m=+167.527293422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.328169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" event={"ID":"25ebc563-7e8a-4d8f-ace8-2d6c767816cf","Type":"ContainerStarted","Data":"ce7275dc9b0505faf357fda3a2560f041a27b41cc92b3214055ec96cf24dcc9c"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.329865 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" event={"ID":"c44b9aaf-de3a-48a8-8760-5553255887ac","Type":"ContainerStarted","Data":"9cede41913997b56f9e43a0dc2bab8c620ba35a3fb3110774d665a4cb117d065"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.332836 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" event={"ID":"cd7922e2-3b17-4212-94b3-2405e20841ad","Type":"ContainerStarted","Data":"09db831e86d1c450c70165a2b7437425ff325654e30625f6159cd607dbf8b13a"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.352344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" event={"ID":"7fc1ca51-0362-4492-ba07-8c5413c39deb","Type":"ContainerStarted","Data":"6d0752410ba98c2bc2f1a92bea73229e89fabbad72bdd349cf6974dd56b8c7a1"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.352388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" event={"ID":"7fc1ca51-0362-4492-ba07-8c5413c39deb","Type":"ContainerStarted","Data":"6faa549e755518bfd5dec01dc6e80a76a8ba8e2e393bcad75ee67b04203d8b8a"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.389267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" event={"ID":"4e62edf8-f827-4fa6-8b40-563c821707ae","Type":"ContainerStarted","Data":"4e41b1a1f4f457fc0474caf8e5ca919e41d2a622c6a06709a5f2df3908f9d18e"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.438975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.439424 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.939399371 +0000 UTC m=+167.640747872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.461820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" event={"ID":"b72b54ef-6699-4091-b47d-f05f7c85adb2","Type":"ContainerStarted","Data":"52a3743f45ced1808d08a2400b6b73d60ac30fc4f23792f8bdb542aa51781cf3"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.479430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerStarted","Data":"333d1fe50b85de201d8359b376659ea922dde6cd7dc921f7d1df2397e061732e"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.491986 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.504431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerStarted","Data":"0e39fca869bb577560ccf5c5e0fd7294441d98f691e7a0b7c896fff632efcbeb"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.539131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" event={"ID":"46caba5b-4a87-480a-ac56-437102a31802","Type":"ContainerStarted","Data":"aacb136b6e0299cc36715a06c8bd3491ac3bb3d3c5b7e39583453f7fc41f4291"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.539180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" event={"ID":"46caba5b-4a87-480a-ac56-437102a31802","Type":"ContainerStarted","Data":"9645e79daceb9f44d806c214d1518c565f7dd080ef9ce89c8b3afaea21bee0f2"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.540332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.540692 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.040679512 +0000 UTC m=+167.742028003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.547505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" event={"ID":"d2aa0043-dc77-41ca-a95f-2d119ed48053","Type":"ContainerStarted","Data":"7394f9ee0cd656a9ab0c003174a7397ecfbee9a1cc9b73ba9a34857dbcd6b515"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.554407 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" event={"ID":"1faa169d-53de-456e-8f99-f93dc2772719","Type":"ContainerStarted","Data":"29869ec4f3416801c9952e4cc002e2be8b2ae1a57d0c81beaf18a751ddccf77f"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.591497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerStarted","Data":"86ef773c0816c089c75665928f1abef5c6f766f515abfa5bb1d78513d4527722"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.641217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.643431 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.14341484 +0000 UTC m=+167.844763331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.686616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" event={"ID":"d3f6bee7-a66e-4cec-83d5-6c0796a73e22","Type":"ContainerStarted","Data":"24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.687958 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.688027 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.689814 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sd6hs"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.689917 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-65rgb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.689952 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podUID="e8aacb4a-f044-427a-b5ef-1d4126b98a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.693248 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.696397 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.696450 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.743164 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.24315411 +0000 UTC m=+167.944502601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.749127 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.809718 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.810985 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" podStartSLOduration=144.810975962 podStartE2EDuration="2m24.810975962s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:56.809979285 +0000 UTC m=+167.511327786" watchObservedRunningTime="2026-01-30 13:45:56.810975962 +0000 UTC m=+167.512324453" Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.844218 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.846680 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.346658309 +0000 UTC m=+168.048006810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.895912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.947359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.947736 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.447724813 +0000 UTC m=+168.149073304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.063564 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.063965 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.563950496 +0000 UTC m=+168.265298987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.164961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.165345 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.665328439 +0000 UTC m=+168.366676930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.182919 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.262916 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podStartSLOduration=144.262896092 podStartE2EDuration="2m24.262896092s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.247459896 +0000 UTC m=+167.948808387" watchObservedRunningTime="2026-01-30 13:45:57.262896092 +0000 UTC m=+167.964244583" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.265942 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.266011 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.765991504 +0000 UTC m=+168.467340005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.270826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.272523 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.772508944 +0000 UTC m=+168.473857435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.328925 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podStartSLOduration=145.328907606 podStartE2EDuration="2m25.328907606s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.296755142 +0000 UTC m=+167.998103633" watchObservedRunningTime="2026-01-30 13:45:57.328907606 +0000 UTC m=+168.030256097" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.329041 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sd6hs" podStartSLOduration=145.32903664 podStartE2EDuration="2m25.32903664s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.324178652 +0000 UTC m=+168.025527143" watchObservedRunningTime="2026-01-30 13:45:57.32903664 +0000 UTC m=+168.030385131" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.345180 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.372131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.372579 4793 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.872559123 +0000 UTC m=+168.573907614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.393684 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2lf59"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.426604 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9v6k"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.473800 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.474139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.97412577 +0000 UTC m=+168.675474261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.482439 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.488941 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.570020 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4pnff"] Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.574482 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.574938 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.074916758 +0000 UTC m=+168.776265269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.676023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.676723 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.176695771 +0000 UTC m=+168.878044262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.695522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" event={"ID":"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8","Type":"ContainerStarted","Data":"576470021ef659f30c7c3a2539e82fa8bd5c5b14ba15049a0ef55de4b5c75eb5"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.697430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" event={"ID":"f30f4833-f565-4225-a45a-02c0f592c37b","Type":"ContainerStarted","Data":"17e76deb92370243c06f7980b3c6816976961fdf27c9e6f1f2e65688869856a1"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.707656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"09fa684b0d4ebc9391c068ae0df11f135365b0d6393d4dde12538b47a1507b7c"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.712433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4pnff" event={"ID":"58e495d5-6c64-4452-b05c-36e055a100b4","Type":"ContainerStarted","Data":"3b2c6d50949137403ba9a8c10686eb0bfe0bd31f7cd7e10f0ba100ae385a864c"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.734266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" event={"ID":"51800ff9-fe19-4a50-a272-be1de629ec82","Type":"ContainerStarted","Data":"9aa26c88fb9b9122494199b4740e048147441c39bd3ab54fbd6e660e38b23848"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.747016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" event={"ID":"4ce07df7-af19-4334-b704-818df47958a1","Type":"ContainerStarted","Data":"40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.761778 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" event={"ID":"afa7929d-37a8-4fa2-9733-158cab1c40ec","Type":"ContainerStarted","Data":"0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.766516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"97c187117ac894b4f40744eaace0837c1dade5f185e1a06955e03936c650d6b8"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.775949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-988dg" event={"ID":"ff810089-efad-424c-8537-f528803767c7","Type":"ContainerStarted","Data":"6aa789dccf5f4881b36dec0232e1b855b2419d40560b793aa9ac888036acc963"} Jan 30 13:45:57 crc 
kubenswrapper[4793]: I0130 13:45:57.777167 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.777491 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.277476619 +0000 UTC m=+168.978825110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.780295 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" event={"ID":"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48","Type":"ContainerStarted","Data":"cff77a10780a1d452309df890646561da8cd096bed583158717ae7bce4c6c9da"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.782842 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" event={"ID":"cd7922e2-3b17-4212-94b3-2405e20841ad","Type":"ContainerStarted","Data":"b429809c4589815fb5f49b2c0edebebb65aa2ef40f8908286328904b0e16c6a2"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.804025 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lf59" event={"ID":"26050dc1-aaba-45b6-8633-015f5e4261f0","Type":"ContainerStarted","Data":"6f005ffde2411968faec1332790f25d7456f670992f069a01455efacb21f1c00"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.809943 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" podStartSLOduration=145.809919851 podStartE2EDuration="2m25.809919851s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.807039425 +0000 UTC m=+168.508387916" watchObservedRunningTime="2026-01-30 13:45:57.809919851 +0000 UTC m=+168.511268342" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.817425 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" event={"ID":"99444dfd-71c4-4d2d-a94a-cecc7a740423","Type":"ContainerStarted","Data":"537561bb010e6c29f93a468442e04f859e3635bd4e19d86b7fb14a93a6631955"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.860241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"9f2e734e355637ec91981730e25d72b3875fcf74dfe3193d7d89e38ad49704e9"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.865450 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerStarted","Data":"b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.874695 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" podStartSLOduration=145.868998102 podStartE2EDuration="2m25.868998102s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.868545201 +0000 UTC m=+168.569893692" watchObservedRunningTime="2026-01-30 13:45:57.868998102 +0000 UTC m=+168.570346593" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.878802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" event={"ID":"3401bbdc-090b-402b-bf7b-a4a823182946","Type":"ContainerStarted","Data":"310a18f020d53e38a65bd5e52c8e9b754a180dbefbf488b0becb0c8fde24d7f7"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.880390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.881360 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.381317976 +0000 UTC m=+169.082666467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.890423 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" event={"ID":"ee3323fa-00f7-45ee-8d54-040e40398b5a","Type":"ContainerStarted","Data":"35d4b95595df385b4efd1e4ea98b44dea785181c369c968e66b34c3aa27fe080"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.898011 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" event={"ID":"8b6283f5-d30b-483e-8772-456b0109a14b","Type":"ContainerStarted","Data":"c7dfa153d75591386ab9ed60f87fdaeb19e42906dab9c09ea14bdec8f6d8578d"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.910451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" event={"ID":"4e62edf8-f827-4fa6-8b40-563c821707ae","Type":"ContainerStarted","Data":"39dc0ea700dee749040077e6ae12d95b42f7940a721f04248f2b017c10a9072c"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.923531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" event={"ID":"9932b998-297e-47a4-a005-ccfca0665793","Type":"ContainerStarted","Data":"a91d2292a307c8b48c622d4b089b8d52f7036cf5e527afa26facd534bcae767d"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.924520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" event={"ID":"e2a53aac-c9f7-465c-821b-cd62aa893d13","Type":"ContainerStarted","Data":"c5e1c5268fdbd7f5b0565de84492659091599640e7128c63aebc0d9f546c8f2d"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.925890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" event={"ID":"b72b54ef-6699-4091-b47d-f05f7c85adb2","Type":"ContainerStarted","Data":"045f03756eb7708f2de161fc2f810472beb65668a9ac2e931f843c27c0643ba0"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.926504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" event={"ID":"9fca2cfc-e4a0-42a0-9815-424987b55fd5","Type":"ContainerStarted","Data":"a8ff1c2296915927c32523d7c5c12e80ac30c681bd831b6e8585353b74330057"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.928022 4793 generic.go:334] "Generic (PLEG): container finished" podID="d2aa0043-dc77-41ca-a95f-2d119ed48053" containerID="175d3fd8eab5742391ef64df6fe143201ebc2c0816979ab91adc7a4c8925613f" exitCode=0 Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.928113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" event={"ID":"d2aa0043-dc77-41ca-a95f-2d119ed48053","Type":"ContainerDied","Data":"175d3fd8eab5742391ef64df6fe143201ebc2c0816979ab91adc7a4c8925613f"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.939211 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kknzc" podStartSLOduration=145.939187086 podStartE2EDuration="2m25.939187086s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.938623472 +0000 UTC m=+168.639971973" watchObservedRunningTime="2026-01-30 13:45:57.939187086 +0000 UTC m=+168.640535577" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.965552 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerStarted","Data":"d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46"} Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.966142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.968321 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.968361 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.970776 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xfcvw" podStartSLOduration=145.970760426 podStartE2EDuration="2m25.970760426s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.968745363 +0000 UTC m=+168.670093844" watchObservedRunningTime="2026-01-30 13:45:57.970760426 +0000 UTC m=+168.672108917" Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.981404 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.983300 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.483282755 +0000 UTC m=+169.184631246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.998587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lv2p" event={"ID":"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc","Type":"ContainerStarted","Data":"a1bf3ad39f1b83e609823551975eb328f953eab1151ca8aadf29efd0d688a8d7"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.008774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerStarted","Data":"2756eee741a154fa1aa7b08871d9983b24c6902d02d5329f07b41386b8b427b1"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.009790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerStarted","Data":"02184320f6531b0c82ba4d167218eef7190463e44618fd9bd7006fada9858678"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.010995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerStarted","Data":"d4d4fa8a5717a04d957f305331300580ade8f686e881920c35d0ae4b21426604"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.020186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" event={"ID":"daa9599a-67b0-421e-8add-0656c0b98af2","Type":"ContainerStarted","Data":"f238185a8ee70f5ee654989191ca0e853395468d89f894a128f4d0f06cb3e963"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.046003 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" podStartSLOduration=146.045986312 podStartE2EDuration="2m26.045986312s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.044593395 +0000 UTC m=+168.745941896" watchObservedRunningTime="2026-01-30 13:45:58.045986312 +0000 UTC m=+168.747334803" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.050058 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" event={"ID":"3806824c-28d3-47d4-b33f-01d9ab1239b8","Type":"ContainerStarted","Data":"5cde59f3f8f5aff2e52f56accf54a4a6faf23873b2d00104e67896a934cc7c4f"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.070131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" event={"ID":"ce6b8f06-a708-4fdf-bbf3-47648cd005ea","Type":"ContainerStarted","Data":"7c75ea3fcddb215096ec65a3a642aacfc024658b46ed2c7bf0fb49de2795c068"} Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071193 4793 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-65rgb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071236 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podUID="e8aacb4a-f044-427a-b5ef-1d4126b98a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071553 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071613 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.080448 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.090474 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.093114 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" podStartSLOduration=146.093099259 podStartE2EDuration="2m26.093099259s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.069438717 +0000 UTC m=+168.770787218" watchObservedRunningTime="2026-01-30 13:45:58.093099259 +0000 UTC m=+168.794447750" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.093908 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.096960 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.59694758 +0000 UTC m=+169.298296061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.097196 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.097319 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.121157 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podStartSLOduration=146.121140745 podStartE2EDuration="2m26.121140745s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.093034847 +0000 UTC m=+168.794383338" watchObservedRunningTime="2026-01-30 13:45:58.121140745 +0000 UTC m=+168.822489236" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.122642 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" podStartSLOduration=146.122636965 podStartE2EDuration="2m26.122636965s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.119814181 +0000 UTC m=+168.821162692" watchObservedRunningTime="2026-01-30 13:45:58.122636965 +0000 UTC m=+168.823985456" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.196296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.198147 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.698128458 +0000 UTC m=+169.399476949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.236539 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2lv2p" podStartSLOduration=146.236524947 podStartE2EDuration="2m26.236524947s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.184014697 +0000 UTC m=+168.885363198" watchObservedRunningTime="2026-01-30 13:45:58.236524947 +0000 UTC m=+168.937873438" Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.299029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.299353 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.799338127 +0000 UTC m=+169.500686618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.404133 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.404499 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.904483689 +0000 UTC m=+169.605832180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.521708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.522447 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.022432917 +0000 UTC m=+169.723781418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.623285 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.623610 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.123596974 +0000 UTC m=+169.824945455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.725789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.726218 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.226196458 +0000 UTC m=+169.927544989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.827501 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.828240 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.328223459 +0000 UTC m=+170.029571950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.930702 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.931685 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.431665836 +0000 UTC m=+170.133014327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.032299 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.032640 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.532625488 +0000 UTC m=+170.233973979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.088538 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.088593 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.118482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" event={"ID":"ce6b8f06-a708-4fdf-bbf3-47648cd005ea","Type":"ContainerStarted","Data":"07fe6e904f27b28fb11aac43945b2e946f813198fee541317eebcee351f6722f"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.134237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.134848 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.634830432 +0000 UTC m=+170.336178923 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.136251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" event={"ID":"1faa169d-53de-456e-8f99-f93dc2772719","Type":"ContainerStarted","Data":"6eb4c7e76e77ed698549785ad31f9e89a2e40102afa15dc6648251bafbbd21f1"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.157497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" event={"ID":"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8","Type":"ContainerStarted","Data":"8201c1db636a976dafb517701da07b041385f89f0e9b3dfc309184a4b9d1d815"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.158546 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.159808 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" podStartSLOduration=146.159788028 podStartE2EDuration="2m26.159788028s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.156006278 +0000 UTC m=+169.857354799" watchObservedRunningTime="2026-01-30 13:45:59.159788028 +0000 UTC m=+169.861136539"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.161690 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nb75n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.163345 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" podUID="1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.168912 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" event={"ID":"8b6283f5-d30b-483e-8772-456b0109a14b","Type":"ContainerStarted","Data":"22de577a997dc4844e6f170b2bc451ebde5e16c1bda76d2b35fe98cc02a61e0f"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.194200 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" podStartSLOduration=146.194183822 podStartE2EDuration="2m26.194183822s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.191397489 +0000 UTC m=+169.892745980" watchObservedRunningTime="2026-01-30 13:45:59.194183822 +0000 UTC m=+169.895532313"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.196103 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" event={"ID":"c44b9aaf-de3a-48a8-8760-5553255887ac","Type":"ContainerStarted","Data":"edffb6218239025134a566d8338344713613e6e23f8b81031e4c34df8a9e9144"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.202260 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" event={"ID":"9932b998-297e-47a4-a005-ccfca0665793","Type":"ContainerStarted","Data":"6d4860beb4109c5a6235b4f3634ee65d2b53e79d766314f0fe423f9bdaa43dbc"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.204257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" event={"ID":"7fc1ca51-0362-4492-ba07-8c5413c39deb","Type":"ContainerStarted","Data":"8d47d4bd3977502b09188b17a29126fa14b79b89d25b3bf5c619b27bbbdc4a04"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.205854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4pnff" event={"ID":"58e495d5-6c64-4452-b05c-36e055a100b4","Type":"ContainerStarted","Data":"2f2faca3b6d19a4a83729a3b35d6dba8587348ad306bb8ddeadb8ea41b2d1c74"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.211418 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" event={"ID":"9fca2cfc-e4a0-42a0-9815-424987b55fd5","Type":"ContainerStarted","Data":"3a4833ce89933f7d1a33fedee1d652f75f0d97dbe2f3a37cf91fb091f62b0575"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.215987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" event={"ID":"51800ff9-fe19-4a50-a272-be1de629ec82","Type":"ContainerStarted","Data":"2af8a796d982c2ee4c0edfc0b738330c2abdc9983916db7150a8f44d58fec00b"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.217968 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerStarted","Data":"2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.218530 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.222781 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2mcj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.223159 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.225282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" event={"ID":"afa7929d-37a8-4fa2-9733-158cab1c40ec","Type":"ContainerStarted","Data":"7879e71671d6f7252902a061b12a530b8ba33625603b0d4d8130f0fc3d40f270"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.227955 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-988dg" event={"ID":"ff810089-efad-424c-8537-f528803767c7","Type":"ContainerStarted","Data":"243dc25471aa047bf84126b95ea2f0a80cc4fca3dfcd4b7891394dd7596496b5"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.229552 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" podStartSLOduration=146.22954087 podStartE2EDuration="2m26.22954087s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.227477237 +0000 UTC m=+169.928825728" watchObservedRunningTime="2026-01-30 13:45:59.22954087 +0000 UTC m=+169.930889361"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.231710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" event={"ID":"25ebc563-7e8a-4d8f-ace8-2d6c767816cf","Type":"ContainerStarted","Data":"463d7be559b645fc1cbfa75616a507af2f5fbef950f5efe4351b0b0273f5de2e"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.232506 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.233984 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fbdzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.234109 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podUID="25ebc563-7e8a-4d8f-ace8-2d6c767816cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.235763 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.235989 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.73596671 +0000 UTC m=+170.437315261 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.242913 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" event={"ID":"d3f6bee7-a66e-4cec-83d5-6c0796a73e22","Type":"ContainerStarted","Data":"49031d31378cbe0f02a6049ef5b9da994544ea93af31a8215dd9e6d3728bf4b9"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.260401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.262163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.265637 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.265710 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.279486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" event={"ID":"f30f4833-f565-4225-a45a-02c0f592c37b","Type":"ContainerStarted","Data":"e8b9bdf9e6b38b1be771498296bf4f5756c61337e164748d997dc6c85949085d"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.286433 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podStartSLOduration=147.286415535 podStartE2EDuration="2m27.286415535s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.262031054 +0000 UTC m=+169.963379555" watchObservedRunningTime="2026-01-30 13:45:59.286415535 +0000 UTC m=+169.987764026"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.287658 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" podStartSLOduration=148.287651346 podStartE2EDuration="2m28.287651346s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.287442261 +0000 UTC m=+169.988790762" watchObservedRunningTime="2026-01-30 13:45:59.287651346 +0000 UTC m=+169.988999837"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.305367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerStarted","Data":"212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.316221 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.320033 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ce07df7-af19-4334-b704-818df47958a1" containerID="faf76e32a21b3409d88a29e026fbc6a735f3e18018e820a84116c9565adccbb0" exitCode=0
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.320113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" event={"ID":"4ce07df7-af19-4334-b704-818df47958a1","Type":"ContainerDied","Data":"faf76e32a21b3409d88a29e026fbc6a735f3e18018e820a84116c9565adccbb0"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.332857 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" podStartSLOduration=146.332838874 podStartE2EDuration="2m26.332838874s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.331216301 +0000 UTC m=+170.032564792" watchObservedRunningTime="2026-01-30 13:45:59.332838874 +0000 UTC m=+170.034187365"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.337922 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.340336 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.84032403 +0000 UTC m=+170.541672521 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.347820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerStarted","Data":"0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.371692 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" podStartSLOduration=146.371668923 podStartE2EDuration="2m26.371668923s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.364873965 +0000 UTC m=+170.066222456" watchObservedRunningTime="2026-01-30 13:45:59.371668923 +0000 UTC m=+170.073017414"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.379932 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" event={"ID":"daa9599a-67b0-421e-8add-0656c0b98af2","Type":"ContainerStarted","Data":"bc009a25495fdc317d5944d28e57adf1be4a457a67969ccdec1e58e68e1cee5e"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.381226 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.388702 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.436859 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" podStartSLOduration=59.436840355 podStartE2EDuration="59.436840355s" podCreationTimestamp="2026-01-30 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.4362375 +0000 UTC m=+170.137586001" watchObservedRunningTime="2026-01-30 13:45:59.436840355 +0000 UTC m=+170.138188846"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.437185 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-988dg" podStartSLOduration=7.437180044 podStartE2EDuration="7.437180044s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.413579044 +0000 UTC m=+170.114927555" watchObservedRunningTime="2026-01-30 13:45:59.437180044 +0000 UTC m=+170.138528535"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.440561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.441551 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.941533178 +0000 UTC m=+170.642881679 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.525082 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" podStartSLOduration=146.525036062 podStartE2EDuration="2m26.525036062s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.515884581 +0000 UTC m=+170.217233082" watchObservedRunningTime="2026-01-30 13:45:59.525036062 +0000 UTC m=+170.226384553"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.542832 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.549326 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.049311069 +0000 UTC m=+170.750659660 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.621118 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" podStartSLOduration=148.621098516 podStartE2EDuration="2m28.621098516s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.538346412 +0000 UTC m=+170.239694903" watchObservedRunningTime="2026-01-30 13:45:59.621098516 +0000 UTC m=+170.322447007"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.644834 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.645350 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.145330662 +0000 UTC m=+170.846679163 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.649038 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podStartSLOduration=146.649009969 podStartE2EDuration="2m26.649009969s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.622425671 +0000 UTC m=+170.323774152" watchObservedRunningTime="2026-01-30 13:45:59.649009969 +0000 UTC m=+170.350358460"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.674417 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podStartSLOduration=146.674397046 podStartE2EDuration="2m26.674397046s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.647838708 +0000 UTC m=+170.349187219" watchObservedRunningTime="2026-01-30 13:45:59.674397046 +0000 UTC m=+170.375745537"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.697302 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" podStartSLOduration=146.697284017 podStartE2EDuration="2m26.697284017s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.690658882 +0000 UTC m=+170.392007373" watchObservedRunningTime="2026-01-30 13:45:59.697284017 +0000 UTC m=+170.398632508"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.746471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.746857 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.246842228 +0000 UTC m=+170.948190719 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.847164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.847407 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.347377059 +0000 UTC m=+171.048725550 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.847575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.847941 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.347927314 +0000 UTC m=+171.049275805 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.980856 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.480835155 +0000 UTC m=+171.182183646 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.980738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.981547 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.981826 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.48181858 +0000 UTC m=+171.183167071 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.083792 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.083902 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.583878951 +0000 UTC m=+171.285227442 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.084141 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.084462 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.584454457 +0000 UTC m=+171.285802948 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.086983 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:00 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:00 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:00 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.087016 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.184615 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.184783 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.684755781 +0000 UTC m=+171.386104272 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.184842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.185203 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.685196893 +0000 UTC m=+171.386545384 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.285292 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.285518 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.785504227 +0000 UTC m=+171.486852718 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.387669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.387949 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.887937218 +0000 UTC m=+171.589285709 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.396196 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lf59" event={"ID":"26050dc1-aaba-45b6-8633-015f5e4261f0","Type":"ContainerStarted","Data":"bf70278bac45a386fe1332d03c028ceb08240eb41110f1dd19708c3139c46a90"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.396239 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lf59" event={"ID":"26050dc1-aaba-45b6-8633-015f5e4261f0","Type":"ContainerStarted","Data":"67fdf5dc2fb3bd571c6367c39c42f40ffdbc089986cdee111a376b51c566d5a4"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.406113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2lf59"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.406142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.406153 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" event={"ID":"ee3323fa-00f7-45ee-8d54-040e40398b5a","Type":"ContainerStarted","Data":"d67051afae4644e435b7ff2207c4adb177e81535b02b0afe3e3f984e50a68a26"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.407332 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mgv7t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.407369 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" podUID="ee3323fa-00f7-45ee-8d54-040e40398b5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.408193 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" event={"ID":"afa7929d-37a8-4fa2-9733-158cab1c40ec","Type":"ContainerStarted","Data":"141da19e21a5c753ba8dbfa39952543b5be8152c8b19f7b5d722d35200e4fb3d"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.412558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" event={"ID":"f30f4833-f565-4225-a45a-02c0f592c37b","Type":"ContainerStarted","Data":"fb9c35226649b0559845588b7db26db8e2dfcd97a94fe1995440c905f68fd6cd"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.413010 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.415629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" event={"ID":"c44b9aaf-de3a-48a8-8760-5553255887ac","Type":"ContainerStarted","Data":"cd86524cb8e49b0001d3388b960362a26af7a64df77f22768d781a2af3bc3421"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.417907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerStarted","Data":"ad8067578dce7cb75b98ef59a545ba8ac0512e86c3d0bc878456ecd3ae97e490"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.421863 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" podStartSLOduration=148.421844519 podStartE2EDuration="2m28.421844519s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.729600175 +0000 UTC m=+170.430948676" watchObservedRunningTime="2026-01-30 13:46:00.421844519 +0000 UTC m=+171.123193010"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.425278 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" event={"ID":"ce6b8f06-a708-4fdf-bbf3-47648cd005ea","Type":"ContainerStarted","Data":"ab744e3dc89600cd7e56f10e41fd4271475ca3626c547af8be3cb2cc2ca56ad0"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.428558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" event={"ID":"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48","Type":"ContainerStarted","Data":"66b4a425e930d35884c64c7b600375d9acb2045b1da8048e32f9c83e9f6faf4d"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.428604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" event={"ID":"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48","Type":"ContainerStarted","Data":"7ddc831980fb643c0d8d74a3339b16e7db8f8a48dd90c6289eb6e488d030286c"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.433667 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" event={"ID":"b72b54ef-6699-4091-b47d-f05f7c85adb2","Type":"ContainerStarted","Data":"77108c01b247508afc341e1a035c80ee33fc3e6964bbff7ee5d8fd975c7d4292"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.435372 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" event={"ID":"d2aa0043-dc77-41ca-a95f-2d119ed48053","Type":"ContainerStarted","Data":"0e81f3d1b0cf33096ac537979ab91ae70d5104a4438f6f9123572c6a18252613"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.437960 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" event={"ID":"4ce07df7-af19-4334-b704-818df47958a1","Type":"ContainerStarted","Data":"cf1f848d7f84df7f56178e6ac1fa86f072a7607f4c9c8ddf92fecb353f675afb"}
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.437988 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442139 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fbdzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442176 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podUID="25ebc563-7e8a-4d8f-ace8-2d6c767816cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442188 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2mcj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442235 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442293 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442342 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.443860 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" podStartSLOduration=147.443845447 podStartE2EDuration="2m27.443845447s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.442562193 +0000 UTC m=+171.143910684" watchObservedRunningTime="2026-01-30 13:46:00.443845447 +0000 UTC m=+171.145193938"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444151 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444179 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444181 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nb75n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444220 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" podUID="1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444993 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2lf59" podStartSLOduration=8.444983017 podStartE2EDuration="8.444983017s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.420336499 +0000 UTC m=+171.121684990" watchObservedRunningTime="2026-01-30 13:46:00.444983017 +0000 UTC m=+171.146331508"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.466899 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" podStartSLOduration=147.466878512 podStartE2EDuration="2m27.466878512s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.466174053 +0000 UTC m=+171.167522554" watchObservedRunningTime="2026-01-30 13:46:00.466878512 +0000 UTC m=+171.168227003"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.490352 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" podStartSLOduration=148.490332877 podStartE2EDuration="2m28.490332877s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.487482403 +0000 UTC m=+171.188830894" watchObservedRunningTime="2026-01-30 13:46:00.490332877 +0000 UTC m=+171.191681368"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.491148 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.491215 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.991203991 +0000 UTC m=+171.692552482 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.498568 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.502076 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.002062716 +0000 UTC m=+171.703411207 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.506097 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" podStartSLOduration=147.506028391 podStartE2EDuration="2m27.506028391s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.501666476 +0000 UTC m=+171.203014997" watchObservedRunningTime="2026-01-30 13:46:00.506028391 +0000 UTC m=+171.207376882"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.570228 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" podStartSLOduration=147.570207636 podStartE2EDuration="2m27.570207636s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.568748837 +0000 UTC m=+171.270097328" watchObservedRunningTime="2026-01-30 13:46:00.570207636 +0000 UTC m=+171.271556127"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.600726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.600954 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.100926632 +0000 UTC m=+171.802275133 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.601271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.602303 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.102294228 +0000 UTC m=+171.803642719 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.637672 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" podStartSLOduration=147.637654018 podStartE2EDuration="2m27.637654018s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.636148118 +0000 UTC m=+171.337496630" watchObservedRunningTime="2026-01-30 13:46:00.637654018 +0000 UTC m=+171.339002509"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.639192 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" podStartSLOduration=147.639187628 podStartE2EDuration="2m27.639187628s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.606365795 +0000 UTC m=+171.307714286" watchObservedRunningTime="2026-01-30 13:46:00.639187628 +0000 UTC m=+171.340536119"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.671465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" podStartSLOduration=147.671448325 podStartE2EDuration="2m27.671448325s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.669388851 +0000 UTC m=+171.370737352" watchObservedRunningTime="2026-01-30 13:46:00.671448325 +0000 UTC m=+171.372796816"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.699789 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4pnff" podStartSLOduration=8.699774849 podStartE2EDuration="8.699774849s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.698142387 +0000 UTC m=+171.399490898" watchObservedRunningTime="2026-01-30 13:46:00.699774849 +0000 UTC m=+171.401123340"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.708144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.708636 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.208616441 +0000 UTC m=+171.909964932 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.729718 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" podStartSLOduration=147.729696605 podStartE2EDuration="2m27.729696605s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.72835403 +0000 UTC m=+171.429702521" watchObservedRunningTime="2026-01-30 13:46:00.729696605 +0000 UTC m=+171.431045096"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.772239 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" podStartSLOduration=147.772220223 podStartE2EDuration="2m27.772220223s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.771589515 +0000 UTC m=+171.472938016" watchObservedRunningTime="2026-01-30 13:46:00.772220223 +0000 UTC m=+171.473568714"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.806102 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" podStartSLOduration=148.806083522 podStartE2EDuration="2m28.806083522s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.80486411 +0000 UTC m=+171.506212601" watchObservedRunningTime="2026-01-30 13:46:00.806083522 +0000 UTC m=+171.507432013"
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.809789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.810180 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.310163789 +0000 UTC m=+172.011512290 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.910705 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.910909 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.410856174 +0000 UTC m=+172.112204675 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.911038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.911514 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.411503491 +0000 UTC m=+172.112851992 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.011830 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.012288 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.512268307 +0000 UTC m=+172.213616798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.083416 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:01 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:01 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:01 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.083817 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.113500 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.113825 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.613811635 +0000 UTC m=+172.315160126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.214892 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.215327 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.715308601 +0000 UTC m=+172.416657092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.315927 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.316214 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.816202871 +0000 UTC m=+172.517551362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.417205 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.417385 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.917346368 +0000 UTC m=+172.618694859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.417553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.417922 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.917889992 +0000 UTC m=+172.619238493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.444293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"f504e11157414eba7b106c750b8214ece0121d39cbd674056e6e2bd96e575025"} Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.445348 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mgv7t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.445397 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" podUID="ee3323fa-00f7-45ee-8d54-040e40398b5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.446893 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.446960 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.458268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.518458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.519685 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.019671166 +0000 UTC m=+172.721019657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.620385 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.620861 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.120846224 +0000 UTC m=+172.822194715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.721036 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.721142 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.221121597 +0000 UTC m=+172.922470088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.721320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.721591 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.22158127 +0000 UTC m=+172.922929771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.822937 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.823077 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.323040004 +0000 UTC m=+173.024388495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.823482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.823764 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.323752703 +0000 UTC m=+173.025101194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.924781 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.924958 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.424933331 +0000 UTC m=+173.126281822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.925231 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.925508 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.425500305 +0000 UTC m=+173.126848796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.026593 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.026756 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.526721195 +0000 UTC m=+173.228069686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.026866 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.027257 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.527248438 +0000 UTC m=+173.228596929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.081020 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:02 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:02 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:02 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.081114 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.128171 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.128385 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.628350664 +0000 UTC m=+173.329699165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.128571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.128830 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.628819157 +0000 UTC m=+173.330167648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.229970 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.230164 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.730137458 +0000 UTC m=+173.431485949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.230286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.230563 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.730556638 +0000 UTC m=+173.431905129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.331765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.332153 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.832128587 +0000 UTC m=+173.533477078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.332414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.332712 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.832699032 +0000 UTC m=+173.534047523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.433190 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.433398 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.933364576 +0000 UTC m=+173.634713727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.433663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.434012 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.934000853 +0000 UTC m=+173.635349344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445187 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fbdzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445240 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podUID="25ebc563-7e8a-4d8f-ace8-2d6c767816cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445374 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2mcj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445440 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.534927 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.535636 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.035604011 +0000 UTC m=+173.736952512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.636415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.636680 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.136669996 +0000 UTC m=+173.838018487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.737592 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.737810 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.237779483 +0000 UTC m=+173.939127974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.737909 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.738287 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.238279095 +0000 UTC m=+173.939627586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.839455 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.839677 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.339657348 +0000 UTC m=+174.041005839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.839839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.840157 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.340148651 +0000 UTC m=+174.041497142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.940693 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.940882 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.440856697 +0000 UTC m=+174.142205188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.940960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.941418 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.441410531 +0000 UTC m=+174.142759022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.041893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.042093 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.542063315 +0000 UTC m=+174.243411816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.042315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.042699 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.542689781 +0000 UTC m=+174.244038272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.060331 4793 csr.go:261] certificate signing request csr-drqs4 is approved, waiting to be issued
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.081103 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:03 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:03 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:03 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.081160 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.097268 4793 csr.go:257] certificate signing request csr-drqs4 is issued
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.143625 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.143928 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.643913651 +0000 UTC m=+174.345262142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.245482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.245931 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.74591559 +0000 UTC m=+174.447264081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.347015 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.347299 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.847285532 +0000 UTC m=+174.548634023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.456769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.457095 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.957083266 +0000 UTC m=+174.658431757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.485166 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"b95be365117fbc3c51a9abafa8ddf9eb5242ebf0fda4266d57f4ed480b28135e"}
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.557364 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.557520 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.057502104 +0000 UTC m=+174.758850585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.557590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.557936 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.057919965 +0000 UTC m=+174.759268456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.658328 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.658699 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.158670331 +0000 UTC m=+174.860018882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.658742 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.659215 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.159200286 +0000 UTC m=+174.860548777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.699636 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.700367 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.712323 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.713472 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.755831 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.760747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.760987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.761084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.761214 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.261197215 +0000 UTC m=+174.962545706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.841196 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.842029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862514 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862572 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.863214 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.363203094 +0000 UTC m=+175.064551585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.929806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.978113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.978411 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.47838672 +0000 UTC m=+175.179735211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.978470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.979286 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.479277493 +0000 UTC m=+175.180625984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.019468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.040153 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.042254 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.071741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.079116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.079264 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.579238699 +0000 UTC m=+175.280587210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.079440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.079701 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.5796905 +0000 UTC m=+175.281038991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.082119 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:04 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:04 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.082165 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.094075 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.098813 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 13:41:03 +0000 UTC, rotation deadline is 2026-10-24 18:21:48.974983894 +0000 UTC
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.098879 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6412h35m44.8761246s for next certificate rotation
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.150363 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-65rgb"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.180641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.180944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.181007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.181042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.181198 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.681175876 +0000 UTC m=+175.382524367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.185538 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6qnl2"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.194385 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.220847 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.255419 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284727 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284775 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284808 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284859 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284880 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.287239 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.787223612 +0000 UTC m=+175.488572113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.287370 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.287734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.325464 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.326677 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.344282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.368799 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.378286 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.381638 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.387139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.887125076 +0000 UTC m=+175.588473567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387506 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387565 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387621 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.391139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.391453 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.89143794 +0000 UTC m=+175.592786441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.392330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.459912 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.463793 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.463838 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.464097 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.464117 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.489435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.489959 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.490021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.490094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.490932 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.990914123 +0000 UTC m=+175.692262614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.494132 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.494428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.540584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"414b16d92436bb895949171adfbbc26c557f08c47f27890387d84b19dad2dd36"}
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.541452 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t46g"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.542493 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.546455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.554897 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.585954 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t46g"]
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591543 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591584 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591649 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.593692 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.093645961 +0000 UTC m=+175.794994522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.692540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.692953 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.693209 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.193190905 +0000 UTC m=+175.894539386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693294 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.695362 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.195352932 +0000 UTC m=+175.896701423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.728625 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.728670 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.757256 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.799679 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.801228 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.301204992 +0000 UTC m=+176.002553543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.821008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.821072 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kknzc"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.845245 4793 patch_prober.go:28] interesting pod/console-f9d7485db-kknzc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.845321 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kknzc" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.879130 4793 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cwwfj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]log ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]etcd ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/max-in-flight-filter ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 30 13:46:04 crc kubenswrapper[4793]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-startinformers ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 13:46:04 crc kubenswrapper[4793]: livez check failed
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.879190 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" podUID="ea703d52-c081-418f-9343-61b68296314f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.889405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.889405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.899859 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.903069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.904884 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.404868495 +0000 UTC m=+176.106216976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.905140 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.943090 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 13:46:04 crc kubenswrapper[4793]: W0130 13:46:04.987176 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode9ad6625_d668_4687_aae5_d2363abda627.slice/crio-8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e WatchSource:0}: Error finding container 8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e: Status 404 returned error can't find the container with id 8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.006215 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.006631 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.506613838 +0000 UTC m=+176.207962329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.013783 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.072814 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.078910 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.085614 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:05 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:05 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:05 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.085677 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.108075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.108468 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.608452993 +0000 UTC m=+176.309801484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.209389 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.210219 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.710190075 +0000 UTC m=+176.411538576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.313159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.313509 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.813497789 +0000 UTC m=+176.514846280 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.414641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.415253 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.915239322 +0000 UTC m=+176.616587813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.518877 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.519260 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.019247484 +0000 UTC m=+176.720595975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.533139 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.545426 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"c3fdd23e324e7fe9c6a51444399362039955a7540651b25e89debd5484d5d7b2"} Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.547276 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerStarted","Data":"8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e"} Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.560220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.621607 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.623239 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.123226424 +0000 UTC m=+176.824574915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.722774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.723213 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 13:46:06.223190981 +0000 UTC m=+176.924539532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.783377 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.795215 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.823439 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.824109 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.324093581 +0000 UTC m=+177.025442072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.874153 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" podStartSLOduration=13.874135586 podStartE2EDuration="13.874135586s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:05.873031147 +0000 UTC m=+176.574379638" watchObservedRunningTime="2026-01-30 13:46:05.874135586 +0000 UTC m=+176.575484077" Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.908872 4793 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.924874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 
13:46:05.925225 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.425212727 +0000 UTC m=+177.126561218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.025964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.026402 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.526384794 +0000 UTC m=+177.227733275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.082812 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:06 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:06 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:06 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.082857 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.083889 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.110559 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.140944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: 
\"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.141296 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.641284332 +0000 UTC m=+177.342632823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.173006 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.173968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.185398 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.244649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.244969 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.744943225 +0000 UTC m=+177.446291716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.246660 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.246886 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.247003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.247132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.247375 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.747365939 +0000 UTC m=+177.448714430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.299111 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.321479 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.347267 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t46g"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.347786 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348379 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.348777 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.848755373 +0000 UTC m=+177.550103924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348930 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.349103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.400063 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: W0130 13:46:06.408499 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod551044e9_867a_4307_a28c_ea34bab39473.slice/crio-2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b WatchSource:0}: Error finding container 2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b: Status 404 returned error can't find the container with id 2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.449579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.449964 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.949952811 +0000 UTC m=+177.651301302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.529772 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.537265 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.537752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.550745 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.551233 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:07.051216861 +0000 UTC m=+177.752565342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.578606 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.593621 4793 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T13:46:05.909141125Z","Handler":null,"Name":""} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.600210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b"} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.620292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"c106e074002678528ae31ccdf1bb58932690b2a742055da2e9f297d7f5cc6c7c"} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.628922 4793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.628958 4793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.636367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"0e22ed488b0d95eaf0cf80ba9106bf9da157b5ab0630c5fce06e88b1a1a7e207"} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.643684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerStarted","Data":"ee249470c28be7e643027b7d1d76ee1a880e2751bfa6c780b72800ea7daeb066"} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.653987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.654092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.654149 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.654215 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.658854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerStarted","Data":"8b8825b53f65bff81a9400879a415d5b1dc1d84fe8464a986eee69eada339360"} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.696320 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.696302212 podStartE2EDuration="3.696302212s" podCreationTimestamp="2026-01-30 13:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:06.694436802 +0000 UTC m=+177.395785313" watchObservedRunningTime="2026-01-30 13:46:06.696302212 +0000 UTC m=+177.397650703" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.755956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.756100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.756173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.756440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.757190 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.788846 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.914674 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.992466 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.993398 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.998785 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.999039 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.006110 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.006164 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.009901 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.060537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.060604 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.078938 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:46:07 crc kubenswrapper[4793]: 
I0130 13:46:07.084308 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:07 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:07 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:07 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.084353 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.099814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.118605 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.120643 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.125534 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.128492 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.161738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.161935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.162007 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.162428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 
13:46:07.167685 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.199557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.262872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.262957 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.262998 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.318857 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.338251 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.364012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.364098 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.364125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.385014 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.416222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.416494 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.428650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.456338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.519087 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.520280 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.530669 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.569666 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.569714 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.569792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.657241 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.671207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.671293 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.672116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.672138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.672516 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " 
pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.691406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5"} Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.718361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.723451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059"} Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.742333 4793 generic.go:334] "Generic (PLEG): container finished" podID="02ec4db2-0283-437a-999f-d50a10ab046c" containerID="9d4a750d40d93b392b9501779e0e72734cfa6f671669f4891033addc84b52774" exitCode=0 Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.742420 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"9d4a750d40d93b392b9501779e0e72734cfa6f671669f4891033addc84b52774"} Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.756395 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.777151 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerStarted","Data":"e438cc892f7ad0406801bd88b27ea7d9474a125c514f11d8ac2ab76f42215f27"} Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.822094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011"} Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.826931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerStarted","Data":"097e24f55ac27743bd9630217aba68c9f9433798eb25d4a7ca41ee8c4336a653"} Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.848344 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.889405 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.000128 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:46:08 crc kubenswrapper[4793]: W0130 13:46:08.028888 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e18cea_cac6_4eb8_b8de_2885fcf57497.slice/crio-a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471 WatchSource:0}: Error finding container a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471: Status 404 returned error can't find the container with id a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.101536 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:08 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:08 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:08 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.101609 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.135666 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.404869 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.831941 4793 generic.go:334] "Generic (PLEG): container finished" podID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerID="bf4b42ce53f022eba5077f61f642433a8e1373279291fcdbe9bff308d17c0e0d" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.831980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"bf4b42ce53f022eba5077f61f642433a8e1373279291fcdbe9bff308d17c0e0d"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.838355 4793 generic.go:334] "Generic (PLEG): container finished" podID="551044e9-867a-4307-a28c-ea34bab39473" containerID="ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.838996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.848709 4793 generic.go:334] "Generic (PLEG): container finished" podID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" 
containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.848761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.850205 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.853012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerStarted","Data":"a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.862054 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.868489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerStarted","Data":"13f1368c8d56c2f3e8a8787fdd36533c727a2ee0ef9f036522e165e8dc981e1f"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.880251 4793 generic.go:334] "Generic (PLEG): container finished" podID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerID="3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.880317 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.885654 4793 generic.go:334] "Generic (PLEG): container finished" podID="e9ad6625-d668-4687-aae5-d2363abda627" containerID="8b8825b53f65bff81a9400879a415d5b1dc1d84fe8464a986eee69eada339360" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.885727 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerDied","Data":"8b8825b53f65bff81a9400879a415d5b1dc1d84fe8464a986eee69eada339360"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.890188 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerStarted","Data":"2a36caa8c6f67671e2dde28b9bd4479d99be637b04d8c44f3c236b38be207c24"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.892221 4793 generic.go:334] "Generic (PLEG): container finished" podID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerID="f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.893152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 
13:46:08.898211 4793 generic.go:334] "Generic (PLEG): container finished" podID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerID="0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.898310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerDied","Data":"0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.905223 4793 generic.go:334] "Generic (PLEG): container finished" podID="89a43c58-d327-429a-96cd-9f9f5393368a" containerID="1292ed33cb4910e7379d650e9bdaa57110f788906801a44590e292cca7705790" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.905441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"1292ed33cb4910e7379d650e9bdaa57110f788906801a44590e292cca7705790"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.905547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerStarted","Data":"1f4643d93c77f9c1fa9d15f80b1a4b9e9c2ad2fc279deeae64b1715da148c011"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.084749 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:09 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:09 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:09 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.084816 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.917713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerStarted","Data":"eb700f355f93bc4ce723121dea6e4b20a49a9db0e924cab9c3f4211a583c1f98"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.920372 4793 generic.go:334] "Generic (PLEG): container finished" podID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" exitCode=0 Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.920440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.930297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerStarted","Data":"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 
13:46:09.930345 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.949176 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.949159035 podStartE2EDuration="3.949159035s" podCreationTimestamp="2026-01-30 13:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:09.935189058 +0000 UTC m=+180.636537559" watchObservedRunningTime="2026-01-30 13:46:09.949159035 +0000 UTC m=+180.650507526" Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.969826 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" podStartSLOduration=157.969810538 podStartE2EDuration="2m37.969810538s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:09.967842326 +0000 UTC m=+180.669190817" watchObservedRunningTime="2026-01-30 13:46:09.969810538 +0000 UTC m=+180.671159029" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.082266 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:10 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:10 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:10 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.082341 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.259522 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.266896 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.323090 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"e9ad6625-d668-4687-aae5-d2363abda627\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326271 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"6db0dcc6-874c-40f9-a0b7-309149c78f48\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326387 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"6db0dcc6-874c-40f9-a0b7-309149c78f48\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326419 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"6db0dcc6-874c-40f9-a0b7-309149c78f48\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"e9ad6625-d668-4687-aae5-d2363abda627\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.327161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume" (OuterVolumeSpecName: "config-volume") pod "6db0dcc6-874c-40f9-a0b7-309149c78f48" (UID: "6db0dcc6-874c-40f9-a0b7-309149c78f48"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.327468 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.327501 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9ad6625-d668-4687-aae5-d2363abda627" (UID: "e9ad6625-d668-4687-aae5-d2363abda627"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.340902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm" (OuterVolumeSpecName: "kube-api-access-2qxpm") pod "6db0dcc6-874c-40f9-a0b7-309149c78f48" (UID: "6db0dcc6-874c-40f9-a0b7-309149c78f48"). InnerVolumeSpecName "kube-api-access-2qxpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.343176 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9ad6625-d668-4687-aae5-d2363abda627" (UID: "e9ad6625-d668-4687-aae5-d2363abda627"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.347395 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6db0dcc6-874c-40f9-a0b7-309149c78f48" (UID: "6db0dcc6-874c-40f9-a0b7-309149c78f48"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428301 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428364 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428392 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428404 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.639201 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2lf59" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.935136 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerDied","Data":"02184320f6531b0c82ba4d167218eef7190463e44618fd9bd7006fada9858678"} Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.935176 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02184320f6531b0c82ba4d167218eef7190463e44618fd9bd7006fada9858678" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.935280 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.939498 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.939509 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerDied","Data":"8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e"} Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.940015 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.942386 4793 generic.go:334] "Generic (PLEG): container finished" podID="8886f940-a230-480f-a911-8caa96286196" containerID="eb700f355f93bc4ce723121dea6e4b20a49a9db0e924cab9c3f4211a583c1f98" exitCode=0 Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.942429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerDied","Data":"eb700f355f93bc4ce723121dea6e4b20a49a9db0e924cab9c3f4211a583c1f98"} Jan 30 13:46:11 crc kubenswrapper[4793]: I0130 13:46:11.083847 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:11 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:11 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:11 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:11 crc kubenswrapper[4793]: I0130 13:46:11.083949 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.082825 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:12 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:12 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:12 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.082871 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.287279 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.359124 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"8886f940-a230-480f-a911-8caa96286196\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.360071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"8886f940-a230-480f-a911-8caa96286196\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.359325 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8886f940-a230-480f-a911-8caa96286196" (UID: "8886f940-a230-480f-a911-8caa96286196"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.394780 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8886f940-a230-480f-a911-8caa96286196" (UID: "8886f940-a230-480f-a911-8caa96286196"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.414419 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.414519 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.463201 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.463236 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.954588 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerDied","Data":"2a36caa8c6f67671e2dde28b9bd4479d99be637b04d8c44f3c236b38be207c24"} Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.954627 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a36caa8c6f67671e2dde28b9bd4479d99be637b04d8c44f3c236b38be207c24" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.954699 4793 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:13 crc kubenswrapper[4793]: I0130 13:46:13.087669 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:13 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:13 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:13 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:13 crc kubenswrapper[4793]: I0130 13:46:13.087732 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.080562 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:14 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:14 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:14 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.080909 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463101 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463165 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463109 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463536 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.821281 4793 patch_prober.go:28] interesting pod/console-f9d7485db-kknzc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 13:46:14 crc kubenswrapper[4793]: 
I0130 13:46:14.821337 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kknzc" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 13:46:15 crc kubenswrapper[4793]: I0130 13:46:15.082224 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:15 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:15 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:15 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:15 crc kubenswrapper[4793]: I0130 13:46:15.082288 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:16 crc kubenswrapper[4793]: I0130 13:46:16.081025 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:16 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:16 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:16 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:16 crc kubenswrapper[4793]: I0130 13:46:16.081093 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:17 crc kubenswrapper[4793]: I0130 13:46:17.080529 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:17 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:17 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:17 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:17 crc kubenswrapper[4793]: I0130 13:46:17.080589 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:18 crc kubenswrapper[4793]: I0130 13:46:18.080642 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:46:18 crc kubenswrapper[4793]: I0130 13:46:18.083090 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.055638 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.060055 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.067174 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" containerID="cri-o://d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46" gracePeriod=30 Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.067399 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" containerID="cri-o://9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec" gracePeriod=30 Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.038660 4793 generic.go:334] "Generic (PLEG): container finished" podID="7dbc78d6-c879-4284-89b6-169d359839bf" containerID="9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec" exitCode=0 Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.038741 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerDied","Data":"9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec"} Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.041374 4793 generic.go:334] "Generic (PLEG): container finished" podID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerID="d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46" exitCode=0 Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.041403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerDied","Data":"d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46"} Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.069645 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.069758 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462804 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462874 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: 
connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462816 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462964 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.463010 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.463601 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818"} pod="openshift-console/downloads-7954f5f757-sd6hs" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.463708 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" containerID="cri-o://f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818" gracePeriod=2 Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.464226 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.464264 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.831878 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.839465 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:46:25 crc kubenswrapper[4793]: I0130 13:46:25.009713 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:25 crc kubenswrapper[4793]: I0130 13:46:25.009775 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:25 crc 
kubenswrapper[4793]: I0130 13:46:25.086277 4793 generic.go:334] "Generic (PLEG): container finished" podID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerID="f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818" exitCode=0 Jan 30 13:46:25 crc kubenswrapper[4793]: I0130 13:46:25.086836 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerDied","Data":"f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818"} Jan 30 13:46:27 crc kubenswrapper[4793]: I0130 13:46:27.393313 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.070273 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.070343 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.463189 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.463260 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:35 crc kubenswrapper[4793]: I0130 13:46:35.009911 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:35 crc kubenswrapper[4793]: I0130 13:46:35.010281 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:35 crc kubenswrapper[4793]: I0130 13:46:35.751662 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.413878 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.414471 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.414518 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.415126 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.415193 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629" gracePeriod=600 Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.183958 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629" exitCode=0 Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.184032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"} Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.463510 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.463588 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986090 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:46:44 crc kubenswrapper[4793]: E0130 13:46:44.986413 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerName="collect-profiles" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986433 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerName="collect-profiles" Jan 30 13:46:44 crc kubenswrapper[4793]: E0130 13:46:44.986446 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8886f940-a230-480f-a911-8caa96286196" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 
13:46:44.986454 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8886f940-a230-480f-a911-8caa96286196" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: E0130 13:46:44.986477 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ad6625-d668-4687-aae5-d2363abda627" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986484 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ad6625-d668-4687-aae5-d2363abda627" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986591 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ad6625-d668-4687-aae5-d2363abda627" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986602 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8886f940-a230-480f-a911-8caa96286196" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986612 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerName="collect-profiles" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.987085 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.989195 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.989602 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.994163 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.008975 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.009230 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.070286 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.070362 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.162180 
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.162447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.263232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.263349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.263384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.281825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.313576 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.386566 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.387596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.402914 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.553302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.553363 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.553391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654364 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654420 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.674754 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.758959 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:54 crc kubenswrapper[4793]: I0130 13:46:54.463614 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:46:54 crc kubenswrapper[4793]: I0130 13:46:54.463925 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.009526 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.009587 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.070576 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.070633 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 13:46:57 crc kubenswrapper[4793]: E0130 13:46:57.874542 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 13:46:57 crc kubenswrapper[4793]: E0130 13:46:57.875427 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2blm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9t46g_openshift-marketplace(551044e9-867a-4307-a28c-ea34bab39473): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" logger="UnhandledError"
Jan 30 13:46:57 crc kubenswrapper[4793]: E0130 13:46:57.876698 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\\\": context canceled\"" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473"
Jan 30 13:47:03 crc kubenswrapper[4793]: E0130 13:47:03.910018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473"
Jan 30 13:47:04 crc kubenswrapper[4793]: I0130 13:47:04.463663 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:04 crc kubenswrapper[4793]: I0130 13:47:04.463724 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.011437 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.011496 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.070897 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" start-of-body=
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.070957 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.334790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerDied","Data":"029de3b1f28797b6cbbf4b7545deaf6781dd6b3401588287ec9fa2ad62c13962"}
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.335321 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029de3b1f28797b6cbbf4b7545deaf6781dd6b3401588287ec9fa2ad62c13962"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.439697 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.490954 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:47:13 crc kubenswrapper[4793]: E0130 13:47:13.491399 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.491416 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.491614 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.492170 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.497526 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614122 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614182 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614204 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614487 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.615337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config" (OuterVolumeSpecName: "config") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.615845 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.621707 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj" (OuterVolumeSpecName: "kube-api-access-9mhtj") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "kube-api-access-9mhtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.625034 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716686 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716728 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716880 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717144 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717167 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717177 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.718419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.723851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.733289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.817148 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.337515 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.368971 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"]
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.371486 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"]
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.404952 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" path="/var/lib/kubelet/pods/7dbc78d6-c879-4284-89b6-169d359839bf/volumes"
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.463229 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.463282 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:14 crc kubenswrapper[4793]: E0130 13:47:14.652580 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 13:47:14 crc kubenswrapper[4793]: E0130 13:47:14.652713 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9nnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6qnl2_openshift-marketplace(840c8b00-73a4-4378-b5a8-83f2595916a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:14 crc kubenswrapper[4793]: E0130 13:47:14.653880 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6qnl2" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4"
Jan 30 13:47:16 crc kubenswrapper[4793]: I0130 13:47:16.009577 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded" start-of-body=
Jan 30 13:47:16 crc kubenswrapper[4793]: I0130 13:47:16.009979 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded"
Jan 30 13:47:18 crc kubenswrapper[4793]: E0130 13:47:18.993468 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6qnl2" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4"
Jan 30 13:47:19 crc kubenswrapper[4793]: E0130 13:47:19.064519 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 13:47:19 crc kubenswrapper[4793]: E0130 13:47:19.064677 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2w4dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fxl8f_openshift-marketplace(0005ba9f-0f70-4df4-b588-8e6f941fec61): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:19 crc kubenswrapper[4793]: E0130 13:47:19.066057 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fxl8f" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61"
Jan 30 13:47:19 crc kubenswrapper[4793]: I0130 13:47:19.178280 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.059933 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fxl8f" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.591410 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.591589 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg5zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g9t8x_openshift-marketplace(b34660b0-a161-4587-96a6-1a86a2e3f632): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.592945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g9t8x" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.737319 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.737494 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j4vzj_openshift-marketplace(02ec4db2-0283-437a-999f-d50a10ab046c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.738716 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j4vzj" podUID="02ec4db2-0283-437a-999f-d50a10ab046c"
Jan 30 13:47:23 crc kubenswrapper[4793]: W0130 13:47:23.547975 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfd400d07_c5a8_40c2_9c01_dab9908caf49.slice/crio-f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9 WatchSource:0}: Error finding container f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9: Status 404 returned error can't find the container with id f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.549266 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j4vzj" podUID="02ec4db2-0283-437a-999f-d50a10ab046c"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.549798 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g9t8x" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.635820 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.635969 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhvt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kvlgd_openshift-marketplace(08b55ba0-087d-42ec-a0c5-538f0a3c0987): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.637369 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kvlgd" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.680796 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.730451 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.730602 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn89t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mn7sx_openshift-marketplace(96451b9c-e42f-43ae-9f62-bc830fa1ad9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.731851 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mn7sx" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.749114 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.749990 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.750037 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.750461 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.752998 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776528 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776722 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776820 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777144 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777248 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777289 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.781579 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config" (OuterVolumeSpecName: "config") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.781806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.781883 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca" (OuterVolumeSpecName: "client-ca") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.786468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.788392 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77" (OuterVolumeSpecName: "kube-api-access-xmq77") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "kube-api-access-xmq77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.790715 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.871429 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.871980 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwrln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vn6kf_openshift-marketplace(89a43c58-d327-429a-96cd-9f9f5393368a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.873677 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vn6kf" podUID="89a43c58-d327-429a-96cd-9f9f5393368a"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.880882 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881208 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881322 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881336 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881347 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881377 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881389 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.882902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.883197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.883364 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.887546 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.896951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.072195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.122581 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.125044 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:47:24 crc kubenswrapper[4793]: W0130 13:47:24.136424 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfbfc4931_01b5_4cc0_a5f5_c3d4e42121a5.slice/crio-e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e WatchSource:0}: Error finding container e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e: Status 404 returned error can't find the container with id e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e
Jan 30 13:47:24 crc kubenswrapper[4793]: W0130 13:47:24.153389 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11837748_ddd9_46ac_8f23_b0b77c511c39.slice/crio-7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc WatchSource:0}: Error finding container 7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc: Status 404 returned error can't find the container with id 7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.358094 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.387299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.390198 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerStarted","Data":"7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.391602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerStarted","Data":"e5b939e411d2d32f4a5a28df3de1f1b782b1984cc3579e1a45fcab992aaff3dd"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.391638 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerStarted","Data":"f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.393250 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerStarted","Data":"a76af574ae39e77263355b1e3c87d747ab2f9d1604f79be4a37d4e9cca505251"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.396062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerDied","Data":"86ef773c0816c089c75665928f1abef5c6f766f515abfa5bb1d78513d4527722"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.396112 4793 scope.go:117] "RemoveContainer" containerID="d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.396122 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.413430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.422820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerStarted","Data":"df43223f45f3ca6f694981bf211205045b8b9092bfab58e6c8f7a89f5b8ccd87"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.424278 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sd6hs"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.425958 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.432754 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.460987 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=40.460973325 podStartE2EDuration="40.460973325s" podCreationTimestamp="2026-01-30 13:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:24.426187833 +0000 UTC m=+255.127536324" watchObservedRunningTime="2026-01-30 13:47:24.460973325 +0000 UTC m=+255.162321816"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.461609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerStarted","Data":"e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.474748 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.474910 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:24 crc kubenswrapper[4793]: E0130 13:47:24.476256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vn6kf"
podUID="89a43c58-d327-429a-96cd-9f9f5393368a" Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.476920 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.477036 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:24 crc kubenswrapper[4793]: E0130 13:47:24.480317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mn7sx" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" Jan 30 13:47:24 crc kubenswrapper[4793]: E0130 13:47:24.482270 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kvlgd" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.592938 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.595630 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.470970 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerStarted","Data":"f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b"} Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.479832 4793 generic.go:334] "Generic (PLEG): container finished" podID="551044e9-867a-4307-a28c-ea34bab39473" containerID="8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d" exitCode=0 Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.481295 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d"} Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.481435 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.481611 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:26 
crc kubenswrapper[4793]: I0130 13:47:26.412179 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" path="/var/lib/kubelet/pods/268883cf-a27e-4b69-bd41-18f0a35c3e6a/volumes" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.486690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerStarted","Data":"6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b"} Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.488596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerStarted","Data":"0618ff92ae5b40adca08a74a83a3ae1b7472aacf6d9f5ce203122d3b72de0111"} Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.490681 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerID="e5b939e411d2d32f4a5a28df3de1f1b782b1984cc3579e1a45fcab992aaff3dd" exitCode=0 Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.490750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerDied","Data":"e5b939e411d2d32f4a5a28df3de1f1b782b1984cc3579e1a45fcab992aaff3dd"} Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.491608 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.491669 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.492013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.500894 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.582094 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" podStartSLOduration=45.582074244 podStartE2EDuration="45.582074244s" podCreationTimestamp="2026-01-30 13:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:26.579607683 +0000 UTC m=+257.280956174" watchObservedRunningTime="2026-01-30 13:47:26.582074244 +0000 UTC m=+257.283422755" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.514797 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" podStartSLOduration=47.51478177 podStartE2EDuration="47.51478177s" podCreationTimestamp="2026-01-30 13:46:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:27.514486663 +0000 UTC m=+258.215835154" watchObservedRunningTime="2026-01-30 13:47:27.51478177 +0000 UTC m=+258.216130261" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.714161 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.757628 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"fd400d07-c5a8-40c2-9c01-dab9908caf49\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.757689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"fd400d07-c5a8-40c2-9c01-dab9908caf49\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.758064 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fd400d07-c5a8-40c2-9c01-dab9908caf49" (UID: "fd400d07-c5a8-40c2-9c01-dab9908caf49"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.763035 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fd400d07-c5a8-40c2-9c01-dab9908caf49" (UID: "fd400d07-c5a8-40c2-9c01-dab9908caf49"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.859376 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.859411 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:47:28 crc kubenswrapper[4793]: I0130 13:47:28.502269 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerDied","Data":"f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9"} Jan 30 13:47:28 crc kubenswrapper[4793]: I0130 13:47:28.502599 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9" Jan 30 13:47:28 crc kubenswrapper[4793]: I0130 13:47:28.502360 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:47:30 crc kubenswrapper[4793]: I0130 13:47:30.437903 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=41.437874958 podStartE2EDuration="41.437874958s" podCreationTimestamp="2026-01-30 13:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:28.520482109 +0000 UTC m=+259.221830620" watchObservedRunningTime="2026-01-30 13:47:30.437874958 +0000 UTC m=+261.139223529" Jan 30 13:47:31 crc kubenswrapper[4793]: I0130 13:47:31.529314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03"} Jan 30 13:47:32 crc kubenswrapper[4793]: I0130 13:47:32.558246 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t46g" podStartSLOduration=7.54184761 podStartE2EDuration="1m28.558226709s" podCreationTimestamp="2026-01-30 13:46:04 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.844176541 +0000 UTC m=+179.545525032" lastFinishedPulling="2026-01-30 13:47:29.86055563 +0000 UTC m=+260.561904131" observedRunningTime="2026-01-30 13:47:32.553913759 +0000 UTC m=+263.255262270" watchObservedRunningTime="2026-01-30 13:47:32.558226709 +0000 UTC m=+263.259575200" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.072565 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.078068 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463347 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463405 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463641 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463690 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.905874 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.906246 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:36 crc kubenswrapper[4793]: I0130 13:47:36.685327 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" probeResult="failure" output=< Jan 30 13:47:36 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:47:36 crc kubenswrapper[4793]: > Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.378741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.379142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.379240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.381227 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.381540 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.381820 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.391595 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.395538 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.410216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.433963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.480333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.484410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.517610 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.527760 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.536563 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:44 crc kubenswrapper[4793]: I0130 13:47:44.481362 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:47:44 crc kubenswrapper[4793]: I0130 13:47:44.993409 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:45 crc kubenswrapper[4793]: I0130 13:47:45.035195 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:45 crc kubenswrapper[4793]: I0130 13:47:45.222189 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t46g"] Jan 30 13:47:46 crc kubenswrapper[4793]: I0130 13:47:46.597553 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" containerID="cri-o://bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" gracePeriod=2 Jan 30 13:47:49 crc kubenswrapper[4793]: I0130 13:47:48.613109 4793 generic.go:334] "Generic (PLEG): container finished" podID="551044e9-867a-4307-a28c-ea34bab39473" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" exitCode=0 Jan 30 13:47:49 crc kubenswrapper[4793]: I0130 13:47:48.613390 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03"} Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.906365 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.908127 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.908725 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.908834 4793 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9t46g" 
podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.028889 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.029487 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031215 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-utilities" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031237 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-utilities" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031257 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031265 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031280 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-content" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031287 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-content" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031300 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerName="pruner" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031308 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerName="pruner" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031434 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031450 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerName="pruner" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036413 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036459 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036655 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036666 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036679 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036684 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036695 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036701 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036709 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036714 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036721 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036727 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036736 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036809 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036827 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036839 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036845 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036981 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036991 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036999 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037006 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037015 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037200 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037329 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.038965 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040031 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040099 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040143 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040183 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.058675 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.066956 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"551044e9-867a-4307-a28c-ea34bab39473\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.067085 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"551044e9-867a-4307-a28c-ea34bab39473\" (UID: 
\"551044e9-867a-4307-a28c-ea34bab39473\") " Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.067131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"551044e9-867a-4307-a28c-ea34bab39473\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068756 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.075401 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities" (OuterVolumeSpecName: "utilities") pod "551044e9-867a-4307-a28c-ea34bab39473" (UID: "551044e9-867a-4307-a28c-ea34bab39473"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.078297 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm" (OuterVolumeSpecName: "kube-api-access-b2blm") pod "551044e9-867a-4307-a28c-ea34bab39473" (UID: "551044e9-867a-4307-a28c-ea34bab39473"). InnerVolumeSpecName "kube-api-access-b2blm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.178769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.178987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179929 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180754 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180853 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179335 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179418 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181662 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.183470 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.198852 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.304898 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "551044e9-867a-4307-a28c-ea34bab39473" (UID: "551044e9-867a-4307-a28c-ea34bab39473"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.390734 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.392924 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.416591 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.736350 4793 generic.go:334] "Generic (PLEG): container finished" podID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerID="0618ff92ae5b40adca08a74a83a3ae1b7472aacf6d9f5ce203122d3b72de0111" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.736463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerDied","Data":"0618ff92ae5b40adca08a74a83a3ae1b7472aacf6d9f5ce203122d3b72de0111"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.737262 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.737561 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.746234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.753514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.754358 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.754502 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.758117 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.773294 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 
13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.784374 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785120 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785137 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785147 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785154 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995" exitCode=2 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785211 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.789801 4793 generic.go:334] "Generic (PLEG): container finished" podID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerID="a39b5636265cc040beb743a7d92b7de07f6a61cbb255d62d9adbf1ef86fd75b0" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.789851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"a39b5636265cc040beb743a7d92b7de07f6a61cbb255d62d9adbf1ef86fd75b0"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791100 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791352 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791499 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791658 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: 
connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.795181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bc1253a936a1fb130b3d0bd5a4a4e0faab053a8532b79f469a9186771a1ba586"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.798845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.798930 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.799716 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.799863 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.800003 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.800160 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.800292 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.804314 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.821210 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.821502 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.821699 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.824693 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.849855 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:04 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513" Netns:"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:04 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:04 crc kubenswrapper[4793]: > Jan 30 13:48:04 crc 
kubenswrapper[4793]: E0130 13:48:04.849951 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:04 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513" Netns:"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:04 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:04 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.849967 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:04 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513" Netns:"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:04 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:04 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.850024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513\\\" Netns:\\\"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" 
podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.021700 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506
ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.022109 4793 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.022469 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.022700 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.023038 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.023090 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.045433 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21" Netns:"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > Jan 30 13:48:05 crc kubenswrapper[4793]: 
E0130 13:48:05.045497 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21" Netns:"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.045523 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21" Netns:"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: 
[openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.045581 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21\\\" Netns:\\\"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049830 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38" Netns:"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049900 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38" Netns:"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049925 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38" Netns:"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38\\\" Netns:\\\"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.806266 4793 generic.go:334] "Generic (PLEG): container finished" podID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.806361 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.807334 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.807530 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.807830 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808198 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808368 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808567 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerStarted","Data":"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810030 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810268 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810533 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810836 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811330 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811689 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811704 4793 generic.go:334] "Generic (PLEG): container finished" podID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerID="3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811770 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811913 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812230 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812380 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 
13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812611 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812868 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813087 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813300 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813515 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813680 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.816913 4793 generic.go:334] "Generic (PLEG): container finished" podID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerID="0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.816967 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.818139 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.818605 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.818882 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.819074 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.819358 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.819799 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820042 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820268 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820616 4793 generic.go:334] "Generic (PLEG): container finished" podID="02ec4db2-0283-437a-999f-d50a10ab046c" containerID="b9519a38e06d14f0b9522f2ca7c944b5d849d5137311c5fba903cacfaefb9b67" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820673 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"b9519a38e06d14f0b9522f2ca7c944b5d849d5137311c5fba903cacfaefb9b67"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.821482 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" 
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.821779 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.821985 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.823248 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.823510 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.823878 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.824268 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.824554 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.824983 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.825475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerStarted","Data":"17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503"}
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.826111 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.827763 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.828266 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.829804 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830137 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830337 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830426 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"}
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830556 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830862 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.831181 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.831534 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.987641 4793 scope.go:117] "RemoveContainer" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.003542 4793 scope.go:117] "RemoveContainer" containerID="8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.062703 4793 scope.go:117] "RemoveContainer" containerID="ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.078872 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.079337 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.079493 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.079751 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080152 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080329 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080464 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080601 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080735 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080867 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080998 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119359 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") "
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119429 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") "
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119452 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") "
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" (UID: "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock" (OuterVolumeSpecName: "var-lock") pod "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" (UID: "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.125331 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" (UID: "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.223981 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.224009 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.224018 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.841433 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.845762 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerDied","Data":"e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e"}
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.845810 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.845785 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.846602 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.846767 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.846938 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.847495 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.847828 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.847996 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848222 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848434 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848606 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848775 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.853259 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.853633 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.854121 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.854495 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.854829 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855130 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855393 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855625 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855889 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.856198 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.852188 4793 generic.go:334] "Generic (PLEG): container finished" podID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" exitCode=0
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.852273 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d"}
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.853824 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.854617 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.855021 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.855739 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.856992 4793 generic.go:334] "Generic (PLEG): container finished" podID="89a43c58-d327-429a-96cd-9f9f5393368a" containerID="17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503" exitCode=0
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.857062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503"}
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.857155 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.858470 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860207 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860509 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860740 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860993 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.861434 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862108 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862140 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862328 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862584 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862768 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6" exitCode=0
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862856 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863106 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863366 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863574 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863757 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863920 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.116922 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.117471 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.117934 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.118290 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.118596 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: I0130 13:48:08.118630 4793 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.118912 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="200ms"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.320432 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="400ms"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.721578 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="800ms"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.206697 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.207454 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.208290 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.208579 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.208916 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.209227 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.209573 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.209910 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.210322 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.210625 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.210951 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.211281 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.211520 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262089 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262207 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262249 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262396 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262763 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262783 4793 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262792 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:09 crc kubenswrapper[4793]: E0130 13:48:09.522670 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="1.6s"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.875653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.876502 4793 scope.go:117] "RemoveContainer" containerID="233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.876649 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.892459 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.893442 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.893850 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.894124 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.894476 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.894844 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.895199 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.895423 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.895795 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.896088 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.896383 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.896939 4793 scope.go:117] "RemoveContainer" containerID="a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.909686 4793 scope.go:117] "RemoveContainer" containerID="ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.920752 4793 scope.go:117] "RemoveContainer" containerID="a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.934411 4793 scope.go:117] "RemoveContainer" containerID="bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.951109 4793 scope.go:117] "RemoveContainer" containerID="e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.079407 4793 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 30 13:48:10 crc kubenswrapper[4793]: E0130 13:48:10.336897 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.400958 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.401308 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.415725 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.415965 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.416216 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.416645 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.416943 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.417351 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.417659 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.417844 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.418093 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.421605 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 13:48:11 crc kubenswrapper[4793]: E0130 13:48:11.124891 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="3.2s"
Jan 30 13:48:14 crc kubenswrapper[4793]: E0130 13:48:14.326418 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="6.4s"
Jan 30 13:48:14 crc kubenswrapper[4793]: I0130 13:48:14.904585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerStarted","Data":"539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55"}
Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.247896 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5
f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.248400 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 
13:48:15.248682 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.248918 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.249213 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.249243 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.911255 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.912446 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.912723 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.912942 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913162 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913352 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913544 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913736 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913928 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.914140 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.538332 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.538380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.594511 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.595122 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.595772 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.595981 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596198 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596387 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596567 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596738 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596907 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.597092 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.597317 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928177 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928474 4793 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410" exitCode=1
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410"}
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928990 4793 scope.go:117] "RemoveContainer" containerID="f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.929465 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.930586 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.930938 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931199 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931406 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931606 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931805 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.932190 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.932482 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.932945 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.933237 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.178236 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397780 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397877 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397770 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.398299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.398394 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.398539 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.399467 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.400424 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.400638 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.400998 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.401443 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.401969 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.402412 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.406779 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.407357 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.407769 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.409142 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.419942 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.419974 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:48:19 crc kubenswrapper[4793]: E0130 13:48:19.420446 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.421189 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:20 crc kubenswrapper[4793]: E0130 13:48:20.338781 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.403633 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.404468 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.404889 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.405161 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.405551 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.405983 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.406451 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.406822 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.407106 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.407477 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.407879 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.408143 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:20 crc kubenswrapper[4793]: E0130 13:48:20.727380 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s"
Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.871970 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.396856 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5
f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.397481 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 
13:48:25.397794 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.398034 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.398269 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.398295 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.587207 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.588465 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.588936 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.589516 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.589772 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.590014 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.590365 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.590736 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591009 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591335 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591675 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591932 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.592234 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.889582 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 13:48:27 crc kubenswrapper[4793]: E0130 13:48:27.729316 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s"
Jan 30 13:48:30 crc kubenswrapper[4793]: E0130 13:48:30.340770 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.402338 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.402884 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.403973 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.404784 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.405815 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.406496 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.406828 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.407255 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.407552 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.407812 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.408106 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.408369 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:34 crc kubenswrapper[4793]: E0130 13:48:34.730562 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s"
Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.784196 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5
f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.785926 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 
13:48:35.786414 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.786663 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.786971 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.787090 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:40 crc kubenswrapper[4793]: E0130 13:48:40.342529 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.400288 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.400833 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.401470 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.401882 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.402525 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.403142 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.405464 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.406114 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.407241 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.407526 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.407783 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.408082 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:41 crc kubenswrapper[4793]: E0130 13:48:41.732529 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.880008 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807dd
d76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection 
refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.880856 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881117 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881382 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881671 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881689 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589099 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11" Netns:"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589341 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11" Netns:"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589363 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11" 
Netns:"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589429 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11\\\" Netns:\\\"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for 
pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.599415 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044" Netns:"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.599481 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod 
openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044" Netns:"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.600183 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044" Netns:"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.600311 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044\\\" Netns:\\\"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.096186 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f4b66b3a3b80510bb6d511455d0313195b10051500368abcf54792dd82c05a59"}
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.098489 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.098782 4793 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3" exitCode=1
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.098808 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3"}
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.099368 4793 scope.go:117] "RemoveContainer" containerID="16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.099641 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.099910 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.100327 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.100508 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.100767 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101216 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101462 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101639 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101786 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101928 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.102080 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.102221 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.102354 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145621 4793 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 30 13:48:47 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5" Netns:"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:48:47 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 30 13:48:47 crc kubenswrapper[4793]: >
Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145695 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 30 13:48:47 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5" Netns:"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:48:47 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 30 13:48:47 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145717 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Jan 30 13:48:47 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5" Netns:"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:48:47 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 30 13:48:47 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145775 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5\\\" Netns:\\\"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:48:48 crc kubenswrapper[4793]: E0130 13:48:48.733124 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s"
Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112409 4793 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6506fc668bb4ba3d37719afb4aa45245679057c496c260396a0681c5eb1ab5fd" exitCode=0
Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112546 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6506fc668bb4ba3d37719afb4aa45245679057c496c260396a0681c5eb1ab5fd"}
Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112905 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112922 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:48:49 crc kubenswrapper[4793]: E0130 13:48:49.113343 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.113369 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
"Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.113562 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.113758 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114011 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114231 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114405 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114578 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114755 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114928 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc 
kubenswrapper[4793]: I0130 13:48:49.115134 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.115314 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.115492 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.115668 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.120919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerStarted","Data":"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.121801 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.122029 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.122302 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.122679 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123102 4793 status_manager.go:851] "Failed to get 
status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerStarted","Data":"bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123555 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123809 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.124389 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.124767 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.125155 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.125573 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.125847 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126083 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126371 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126550 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126705 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126843 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127007 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127180 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127319 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127497 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127679 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127830 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127971 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.128205 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.128435 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.128765 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerStarted","Data":"04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130111 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130372 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130671 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130988 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" 
pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.131262 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.131563 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.131804 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132066 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132351 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132561 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132847 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133195 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133400 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133888 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.134127 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.134338 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.134639 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135093 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135315 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135554 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135845 4793 status_manager.go:851] "Failed to get status for 
pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136160 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136246 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d898ce2eb670ce9a98146f45c2c9134c0399865527e45c0963a3df7613fb855"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136261 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136555 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.138737 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.139164 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.139419 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.139865 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.140202 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.140522 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.140921 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.141037 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.141934 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3f8251d8cc4d16af4a648c0de85dc3b7067c45868ed41fc506bb343a45b0bfda"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142176 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142469 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142705 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142973 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 
30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.143430 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.143775 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144013 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144317 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144616 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144806 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144992 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145264 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145446 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial 
tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145623 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145767 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145906 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146069 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146414 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146627 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146784 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146922 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.148433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerStarted","Data":"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c"} Jan 30 13:48:49 crc kubenswrapper[4793]: 
I0130 13:48:49.151216 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.151594 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.151933 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.152856 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.155269 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.155689 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.155925 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.156257 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.156418 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection 
refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.156573 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.158577 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.175516 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.177446 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.177499 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.177525 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.193554 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.213898 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.233760 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.256700 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.274278 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.294503 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.313606 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.334008 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.354249 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.374191 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.393970 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.413868 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.433766 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" 
pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.454341 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.473928 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:50 crc kubenswrapper[4793]: I0130 13:48:50.166133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0530a3b6a8c1fa539f47b2b61219189174a05eda145a7977d3139dafc2f5fabc"} Jan 30 13:48:51 crc kubenswrapper[4793]: I0130 13:48:51.172895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f2ce507d8896c9a4147fd15d2195cc8386fc0c107e2d3da6dc6b3afd7cf3a5aa"} Jan 30 13:48:53 crc kubenswrapper[4793]: I0130 13:48:53.186108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d58f2970981102c5de1327291e81f27036a6711b7e3ce61eeef1bc8ce66569b"} Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.192721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f67021520d79c4b475c79918229753abd870a84a3bc800d01f5ee27b3e04943d"} Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.382219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.382854 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.423583 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.555603 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.555683 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.595361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.694014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.694079 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.738981 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.201429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a393268ebfc4150bb652a680cc053a55806d9cef1ed7d3ab4cdeee748f359c1f"} Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.242077 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.244101 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.247710 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.218011 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.218315 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.218006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.224377 4793 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.336514 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.889793 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.915928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.916160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.968515 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.224452 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.224507 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.228702 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.264489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.397603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.397970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.457176 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.457214 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.515414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:57 crc kubenswrapper[4793]: W0130 13:48:57.803293 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47 WatchSource:0}: Error finding container 70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47: Status 404 returned error can't find the container with id 70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47 Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.849712 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.850159 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.898238 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 13:48:58.231450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a92c44723e724fe3d77b0711ba4590782cf5ceec156d6f06ef0f99d1495d7a42"} Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 13:48:58.231523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47"} Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 13:48:58.270624 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 
13:48:58.270688 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:59 crc kubenswrapper[4793]: I0130 13:48:59.177642 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 13:48:59 crc kubenswrapper[4793]: I0130 13:48:59.177708 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 13:49:00 crc kubenswrapper[4793]: I0130 13:49:00.398319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:49:00 crc kubenswrapper[4793]: I0130 13:49:00.413524 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:49:00 crc kubenswrapper[4793]: W0130 13:49:00.822424 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6 WatchSource:0}: Error finding container 38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6: Status 404 returned error can't find the container with id 38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6 Jan 30 13:49:01 crc kubenswrapper[4793]: I0130 13:49:01.248357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708"} Jan 30 13:49:01 crc kubenswrapper[4793]: I0130 13:49:01.248656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6"} Jan 30 13:49:02 crc kubenswrapper[4793]: I0130 13:49:02.397336 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:02 crc kubenswrapper[4793]: I0130 13:49:02.397889 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:02 crc kubenswrapper[4793]: W0130 13:49:02.657749 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2 WatchSource:0}: Error finding container d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2: Status 404 returned error can't find the container with id d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2 Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.270709 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.271064 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708" exitCode=255 Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.271132 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708"} Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.271790 4793 scope.go:117] "RemoveContainer" containerID="8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708" Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.273948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"85a0bd544390d7ba5f391d36b711e3b22bf82d73434a81c8cd5186feadb231d6"} Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.273999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2"} Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.274451 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.283606 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.284908 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.284973 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" exitCode=255 Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.285156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e"} Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.285230 4793 scope.go:117] "RemoveContainer" containerID="8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.285717 4793 scope.go:117] "RemoveContainer" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" Jan 30 13:49:04 crc kubenswrapper[4793]: E0130 13:49:04.286181 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.295761 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746467 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746471 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746520 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746594 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:49:06 crc kubenswrapper[4793]: I0130 13:49:06.303776 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b" exitCode=0 Jan 30 13:49:06 crc kubenswrapper[4793]: I0130 13:49:06.303858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b"} Jan 30 13:49:06 crc kubenswrapper[4793]: I0130 13:49:06.304512 4793 scope.go:117] "RemoveContainer" containerID="e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b" Jan 30 13:49:07 crc 
kubenswrapper[4793]: I0130 13:49:07.309944 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/1.log" Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310382 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" exitCode=1 Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae"} Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310442 4793 scope.go:117] "RemoveContainer" containerID="e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b" Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310961 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:07 crc kubenswrapper[4793]: E0130 13:49:07.311253 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:08 crc kubenswrapper[4793]: I0130 13:49:08.316770 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/1.log" Jan 30 13:49:09 crc kubenswrapper[4793]: I0130 13:49:09.181104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:49:09 crc kubenswrapper[4793]: I0130 13:49:09.185772 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:49:15 crc kubenswrapper[4793]: I0130 13:49:15.745450 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:15 crc kubenswrapper[4793]: I0130 13:49:15.746303 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:15 crc kubenswrapper[4793]: I0130 13:49:15.746405 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:15 crc kubenswrapper[4793]: E0130 13:49:15.746651 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:16 crc kubenswrapper[4793]: I0130 13:49:16.357257 4793 scope.go:117] "RemoveContainer" 
containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:16 crc kubenswrapper[4793]: E0130 13:49:16.357440 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:18 crc kubenswrapper[4793]: I0130 13:49:18.398551 4793 scope.go:117] "RemoveContainer" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" Jan 30 13:49:19 crc kubenswrapper[4793]: I0130 13:49:19.374101 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:19 crc kubenswrapper[4793]: I0130 13:49:19.374167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e"} Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.382229 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.382997 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383043 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" exitCode=255 Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e"} Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383213 4793 scope.go:117] "RemoveContainer" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383705 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:20 crc kubenswrapper[4793]: E0130 13:49:20.383900 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:21 crc kubenswrapper[4793]: I0130 13:49:21.388894 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" 
Jan 30 13:49:24 crc kubenswrapper[4793]: I0130 13:49:24.243312 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 13:49:27 crc kubenswrapper[4793]: I0130 13:49:27.202329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:49:28 crc kubenswrapper[4793]: I0130 13:49:28.398258 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:28 crc kubenswrapper[4793]: I0130 13:49:28.499483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.104032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.438563 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/1.log" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439217 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" exitCode=1 Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439246 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c"} Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439278 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439776 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:29 crc kubenswrapper[4793]: E0130 13:49:29.440008 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:30 crc kubenswrapper[4793]: I0130 13:49:30.455172 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.277193 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.398446 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:33 crc kubenswrapper[4793]: E0130 
13:49:33.398912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.446501 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.486934 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.538071 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.812516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:49:34 crc kubenswrapper[4793]: I0130 13:49:34.037013 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:49:34 crc kubenswrapper[4793]: I0130 13:49:34.496802 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:49:34 crc kubenswrapper[4793]: I0130 13:49:34.610634 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:49:35 crc kubenswrapper[4793]: I0130 13:49:35.745429 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:35 crc kubenswrapper[4793]: I0130 13:49:35.746223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:35 crc kubenswrapper[4793]: I0130 13:49:35.746732 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:35 crc kubenswrapper[4793]: E0130 13:49:35.747121 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:36 crc kubenswrapper[4793]: I0130 13:49:36.487596 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:36 crc kubenswrapper[4793]: E0130 13:49:36.488024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:38 crc kubenswrapper[4793]: I0130 13:49:38.553215 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:49:40 crc kubenswrapper[4793]: I0130 13:49:40.199188 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:49:41 crc kubenswrapper[4793]: I0130 13:49:41.700628 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:49:41 crc kubenswrapper[4793]: I0130 13:49:41.876982 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:49:41 crc kubenswrapper[4793]: I0130 13:49:41.994627 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:49:42 crc kubenswrapper[4793]: I0130 13:49:42.413356 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:49:42 crc kubenswrapper[4793]: I0130 13:49:42.413416 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:49:43 crc kubenswrapper[4793]: I0130 13:49:43.078989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.201439 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.379585 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.557904 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.566750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:49:45 crc kubenswrapper[4793]: I0130 13:49:45.246900 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:49:45 crc kubenswrapper[4793]: I0130 13:49:45.397898 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.029276 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.389504 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.517837 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 
13:49:46.538196 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.538896 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.539077 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" exitCode=255 Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.539152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68"} Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.539341 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.540236 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:49:46 crc kubenswrapper[4793]: E0130 13:49:46.540665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:47 crc kubenswrapper[4793]: I0130 13:49:47.081647 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:49:47 crc kubenswrapper[4793]: I0130 13:49:47.545716 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Jan 30 13:49:48 crc kubenswrapper[4793]: I0130 13:49:48.179716 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:49:48 crc kubenswrapper[4793]: I0130 13:49:48.397792 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:48 crc kubenswrapper[4793]: E0130 13:49:48.397984 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.533589 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.621670 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:49:49 crc 
kubenswrapper[4793]: I0130 13:49:49.650174 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.923357 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.982545 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:49:50 crc kubenswrapper[4793]: I0130 13:49:50.226966 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:49:50 crc kubenswrapper[4793]: I0130 13:49:50.768684 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.126663 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.305865 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.369817 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.528950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.045464 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.159728 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.168481 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.287833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.378532 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.470977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.649876 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.002988 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.164957 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.262913 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.807706 4793 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.819560 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.844551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.863767 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.962703 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.096391 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.214178 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.215626 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.381196 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.055966 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.135486 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.307004 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.377178 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.394394 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.829290 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.881041 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.014957 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.052077 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.167739 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:49:56 crc 
kubenswrapper[4793]: I0130 13:49:56.210605 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.254533 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.589497 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.691578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.905753 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.053042 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.057568 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.124668 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.328730 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.452202 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.789617 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.963297 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.985961 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.478836 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.489667 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.605595 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.605642 4793 generic.go:334] "Generic (PLEG): container finished" podID="10c05bcf-ffb2-4175-b323-067804ea3391" containerID="212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695" exitCode=1 Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.605683 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" 
event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerDied","Data":"212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695"} Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.606201 4793 scope.go:117] "RemoveContainer" containerID="212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.684513 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.833877 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.909640 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.149521 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.250683 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.275034 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.314089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.359553 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.377810 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.537108 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.616522 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.616578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerStarted","Data":"b05360624036ea9bd7a9da009b7bb2eef5dfd51728acb5243e4acc994916b054"} Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.749124 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.054766 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.407074 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.408499 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.696022 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.818541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.980470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.173288 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.215156 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.369933 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.398272 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.398566 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:50:01 crc kubenswrapper[4793]: E0130 13:50:01.399076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.428520 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.627608 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.627945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"} Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.628393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.630012 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.630078 4793 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.757708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.806488 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.924782 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.991667 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.003243 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.030264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.055474 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.166021 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.370612 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.399218 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.399247 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.403790 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.405716 4793 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://a393268ebfc4150bb652a680cc053a55806d9cef1ed7d3ab4cdeee748f359c1f" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.405841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.515420 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635095 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log" Jan 30 13:50:02 crc 
Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.757708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.806488 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.924782 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.991667 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.003243 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.030264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.055474 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.166021 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.370612 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.399218 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.399247 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.403790 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.405716 4793 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://a393268ebfc4150bb652a680cc053a55806d9cef1ed7d3ab4cdeee748f359c1f"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.405841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.515420 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635095 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635631 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635687 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" exitCode=1
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635827 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"}
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635869 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.636263 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"
Jan 30 13:50:02 crc kubenswrapper[4793]: E0130 13:50:02.636451 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.636626 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.636648 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.655114 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.695792 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.728519 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.745215 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.780418 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.830076 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.252369 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.386575 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.394251 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.546654 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.642568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.643290 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:03 crc kubenswrapper[4793]: E0130 13:50:03.643543 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.980970 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.061741 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.271673 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.473225 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.582635 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.124011 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.147316 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.362614 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.654889 4793 generic.go:334] "Generic (PLEG): container finished" podID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerID="6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b" exitCode=0 Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.654933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerDied","Data":"6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b"} Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.655397 4793 scope.go:117] 
"RemoveContainer" containerID="6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.658699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.746004 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.746861 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:05 crc kubenswrapper[4793]: E0130 13:50:05.747110 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.110547 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.268353 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.403613 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.661776 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerStarted","Data":"c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290"} Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.662102 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.665290 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.689173 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.727346 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.896032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.918312 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.381568 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.408399 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.695827 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.781691 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.790953 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.868687 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.872325 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.924949 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.067503 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.101856 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.107640 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.646659 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.241736 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.294144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.443147 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.536712 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.903730 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.991376 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.015781 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.129946 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.244620 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.257672 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.296153 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.341278 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.406958 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.447030 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.683094 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-h5zfs_7c31ba39-5ef3-458b-89c1-eb43adfa3d7f/machine-approver-controller/0.log" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.683911 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c31ba39-5ef3-458b-89c1-eb43adfa3d7f" containerID="0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10" exitCode=255 Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.683952 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerDied","Data":"0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10"} Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.684954 4793 scope.go:117] "RemoveContainer" containerID="0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.814128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.868713 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.972173 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.185486 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.326864 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.349731 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.431312 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.666478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" 
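
The steady stream of "Caches populated" entries through this section comes from client-go reflectors: the kubelet runs informer-backed watches on the Secrets and ConfigMaps its pods reference, and each message marks a reflector's initial LIST completing so the cached object can be served. A minimal client-go sketch of the same pattern, assuming in-cluster credentials and borrowing the openshift-etcd-operator namespace from the entry above (an illustration of the mechanism, not kubelet code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes we run inside a pod
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        stop := make(chan struct{})
        defer close(stop)

        // Watch Secrets in a single namespace, as the kubelet does per pod namespace.
        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 0, informers.WithNamespace("openshift-etcd-operator"))
        informer := factory.Core().V1().Secrets().Informer()

        factory.Start(stop)
        // Blocks until the reflector's initial LIST completes, which is the
        // moment the log above reports "Caches populated".
        if !cache.WaitForCacheSync(stop, informer.HasSynced) {
            panic("cache never synced")
        }
        fmt.Println("caches populated for *v1.Secret")
    }
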
Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.691768 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-h5zfs_7c31ba39-5ef3-458b-89c1-eb43adfa3d7f/machine-approver-controller/0.log" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.692150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"0b700c53562ddd958f0820e4e1e832563a04eae702566772395e92ffa66383fc"} Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.699352 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.708555 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.717829 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.872754 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.934846 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.049507 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.097882 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.193547 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.413531 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.413600 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.598782 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.844293 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.199757 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.541812 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:50:13 crc 
kubenswrapper[4793]: I0130 13:50:13.790043 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.855535 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.895707 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.340207 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.398429 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:50:14 crc kubenswrapper[4793]: E0130 13:50:14.398649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.450648 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.455125 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.817103 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.877750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.924548 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.931956 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.965981 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.089030 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.181740 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.189411 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.203846 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.261327 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.284007 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.376916 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.426448 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.517478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.617814 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.637624 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.763794 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.873169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.961390 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.041069 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.253107 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.523637 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.673745 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.052946 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.938757 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.939505 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.943211 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 13:50:17 crc kubenswrapper[4793]: E0130 13:50:17.943514 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9"
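
The kubelet keeps refusing to restart the marketplace-operator container (13:50:02, 13:50:03, 13:50:05, and here at 13:50:17), each time quoting "back-off 40s". The delay is the kubelet's container restart backoff, which starts at 10s and doubles per consecutive failure up to a 5m cap, so a third consecutive failure lands on 40s. A small Go illustration of that progression (our own helper; the constants mirror the kubelet's defaults rather than code taken from it):

    package main

    import (
        "fmt"
        "time"
    )

    // restartBackoff returns a kubelet-style delay before restart n of a
    // crash-looping container: base 10s, doubled each consecutive failure,
    // capped at 5 minutes.
    func restartBackoff(n int) time.Duration {
        const (
            base    = 10 * time.Second
            maximum = 5 * time.Minute
        )
        d := base
        for i := 1; i < n; i++ {
            d *= 2
            if d >= maximum {
                return maximum
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("restart %d: back-off %s\n", n, restartBackoff(n))
        }
        // restart 3 prints "back-off 40s", matching the refusals above.
    }
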
Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.955282 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.955963 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.958385 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.964951 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 13:50:18 crc kubenswrapper[4793]: I0130 13:50:18.127441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 13:50:18 crc kubenswrapper[4793]: I0130 13:50:18.234405 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 13:50:18 crc kubenswrapper[4793]: I0130 13:50:18.754619 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.114370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.350658 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.424703 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.454332 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.553974 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.006446 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.246231 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.699486 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.984361 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.068824 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.213468 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.461453 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.973157 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.041875 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.218208 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.329210 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.668189 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.393727 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.740833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.807505 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.822034 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.849816 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.871275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 13:50:25 crc kubenswrapper[4793]: I0130 13:50:25.082350 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 13:50:25 crc kubenswrapper[4793]: I0130 13:50:25.216737 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 13:50:25 crc kubenswrapper[4793]: I0130 13:50:25.400465 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68"
Jan 30 13:50:25 crc kubenswrapper[4793]: E0130 13:50:25.400879 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:50:27 crc kubenswrapper[4793]: I0130 13:50:27.093592 4793 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.360370 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.362019 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qnl2" podStartSLOduration=127.975115058 podStartE2EDuration="4m25.362003817s" podCreationTimestamp="2026-01-30 13:46:04 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.896746941 +0000 UTC m=+179.598095432" lastFinishedPulling="2026-01-30 13:48:26.28363566 +0000 UTC m=+316.984984191" observedRunningTime="2026-01-30 13:48:56.487145353 +0000 UTC m=+347.188493854" watchObservedRunningTime="2026-01-30 13:50:29.362003817 +0000 UTC m=+440.063352308" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.362565 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vn6kf" podStartSLOduration=106.21057781 podStartE2EDuration="4m22.362558482s" podCreationTimestamp="2026-01-30 13:46:07 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.912918036 +0000 UTC m=+179.614266537" lastFinishedPulling="2026-01-30 13:48:45.064898718 +0000 UTC m=+335.766247209" observedRunningTime="2026-01-30 13:48:56.308941991 +0000 UTC m=+347.010290492" watchObservedRunningTime="2026-01-30 13:50:29.362558482 +0000 UTC m=+440.063906973" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.362951 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=145.362944431 podStartE2EDuration="2m25.362944431s" podCreationTimestamp="2026-01-30 13:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:48:56.372216544 +0000 UTC m=+347.073565035" watchObservedRunningTime="2026-01-30 13:50:29.362944431 +0000 UTC m=+440.064292922" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.363112 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvlgd" podStartSLOduration=138.765511015 podStartE2EDuration="4m23.363107585s" podCreationTimestamp="2026-01-30 13:46:06 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.833253934 +0000 UTC m=+179.534602425" lastFinishedPulling="2026-01-30 13:48:13.430850504 +0000 UTC m=+304.132198995" observedRunningTime="2026-01-30 13:48:56.422223518 +0000 UTC m=+347.123572019" watchObservedRunningTime="2026-01-30 13:50:29.363107585 +0000 UTC m=+440.064456086" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.363377 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mn7sx" podStartSLOduration=122.051402863 podStartE2EDuration="4m23.363373772s" podCreationTimestamp="2026-01-30 13:46:06 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.850816305 +0000 UTC m=+179.552164796" lastFinishedPulling="2026-01-30 13:48:30.162787214 +0000 UTC m=+320.864135705" observedRunningTime="2026-01-30 13:48:56.472938511 +0000 UTC m=+347.174287022" watchObservedRunningTime="2026-01-30 13:50:29.363373772 +0000 UTC m=+440.064722263" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.363708 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-j4vzj" podStartSLOduration=106.556875383 podStartE2EDuration="4m25.36370382s" podCreationTimestamp="2026-01-30 13:46:04 +0000 UTC" firstStartedPulling="2026-01-30 13:46:07.756077379 +0000 UTC m=+178.457425870" lastFinishedPulling="2026-01-30 13:48:46.562905816 +0000 UTC m=+337.264254307" observedRunningTime="2026-01-30 13:48:56.389879553 +0000 UTC m=+347.091228054" watchObservedRunningTime="2026-01-30 13:50:29.36370382 +0000 UTC m=+440.065052311" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.364332 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g9t8x" podStartSLOduration=109.566719487 podStartE2EDuration="4m26.364327746s" podCreationTimestamp="2026-01-30 13:46:03 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.884207962 +0000 UTC m=+179.585556453" lastFinishedPulling="2026-01-30 13:48:45.681816201 +0000 UTC m=+336.383164712" observedRunningTime="2026-01-30 13:48:56.454123791 +0000 UTC m=+347.155472302" watchObservedRunningTime="2026-01-30 13:50:29.364327746 +0000 UTC m=+440.065676227" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.365158 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fxl8f" podStartSLOduration=115.082957565 podStartE2EDuration="4m22.365152277s" podCreationTimestamp="2026-01-30 13:46:07 +0000 UTC" firstStartedPulling="2026-01-30 13:46:09.922379222 +0000 UTC m=+180.623727703" lastFinishedPulling="2026-01-30 13:48:37.204573924 +0000 UTC m=+327.905922415" observedRunningTime="2026-01-30 13:48:56.4352179 +0000 UTC m=+347.136566421" watchObservedRunningTime="2026-01-30 13:50:29.365152277 +0000 UTC m=+440.066500768" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.366916 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-9t46g"] Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367081 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367201 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp","openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"] Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367455 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" containerID="cri-o://f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b" gracePeriod=30 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367478 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367727 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367748 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" 
containerID="cri-o://c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290" gracePeriod=30 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.395528 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.395509127 podStartE2EDuration="1m33.395509127s" podCreationTimestamp="2026-01-30 13:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:29.391428534 +0000 UTC m=+440.092777015" watchObservedRunningTime="2026-01-30 13:50:29.395509127 +0000 UTC m=+440.096857628" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.421333 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.421837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.426882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.789924 4793 generic.go:334] "Generic (PLEG): container finished" podID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerID="f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b" exitCode=0 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.790022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerDied","Data":"f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b"} Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.792600 4793 generic.go:334] "Generic (PLEG): container finished" podID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerID="c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290" exitCode=0 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.792770 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerDied","Data":"c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290"} Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.792910 4793 scope.go:117] "RemoveContainer" containerID="6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.797140 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.198769 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.199426 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.228818 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"] Jan 30 13:50:30 crc kubenswrapper[4793]: E0130 13:50:30.229083 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerName="installer" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229097 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerName="installer" Jan 30 13:50:30 crc kubenswrapper[4793]: E0130 13:50:30.229107 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229114 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: E0130 13:50:30.229128 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229134 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229235 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229251 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerName="installer" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229261 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229269 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229707 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.233995 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354465 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354529 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354669 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354690 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354713 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354750 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354949 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca" (OuterVolumeSpecName: "client-ca") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355694 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config" (OuterVolumeSpecName: "config") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355704 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config" (OuterVolumeSpecName: "config") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.356607 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca" (OuterVolumeSpecName: "client-ca") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.356806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361041 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361241 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz" (OuterVolumeSpecName: "kube-api-access-clpjz") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "kube-api-access-clpjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361265 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361719 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78" (OuterVolumeSpecName: "kube-api-access-94q78") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "kube-api-access-94q78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.405937 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551044e9-867a-4307-a28c-ea34bab39473" path="/var/lib/kubelet/pods/551044e9-867a-4307-a28c-ea34bab39473/volumes"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456490 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456631 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456644 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456672 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456684 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456695 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456708 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456717 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456726 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456805 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.458545 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.459004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.464554 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.474417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.554373 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.738452 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.799338 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerDied","Data":"7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc"}
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.799393 4793 scope.go:117] "RemoveContainer" containerID="f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.799484 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.802747 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerStarted","Data":"a468e3c27d0a2cd913ba2f2058976b9b7319433f6282b4c4fb42aa2a1b0b5981"}
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.805715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerDied","Data":"a76af574ae39e77263355b1e3c87d747ab2f9d1604f79be4a37d4e9cca505251"}
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.805956 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.807857 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.829173 4793 scope.go:117] "RemoveContainer" containerID="c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.845883 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.848871 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.860744 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.865073 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.813936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerStarted","Data":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"}
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.814490 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.820519 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.863551 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" podStartSLOduration=10.863534741 podStartE2EDuration="10.863534741s" podCreationTimestamp="2026-01-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:31.832633077 +0000 UTC m=+442.533981568" watchObservedRunningTime="2026-01-30 13:50:31.863534741 +0000 UTC m=+442.564883232"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.386604 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:32 crc kubenswrapper[4793]: E0130 13:50:32.387329 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.387565 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.388639 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.393431 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.393775 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.393993 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.394740 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.395177 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.395512 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.401795 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.409386 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" path="/var/lib/kubelet/pods/11837748-ddd9-46ac-8f23-b0b77c511c39/volumes"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.409986 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" path="/var/lib/kubelet/pods/bb9452c1-1f30-4fd9-aaf3-49fd8266818d/volumes"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.410711 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481652 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481741 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481781 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481831 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583282 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583342 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583376 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.584608 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.584721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.585180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.589269 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.604476 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.721821 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.908951 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.398728 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"
Jan 30 13:50:33 crc kubenswrapper[4793]: E0130 13:50:33.399389 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.695257 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.830456 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerStarted","Data":"abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1"}
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.830517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerStarted","Data":"47aac0b2bf64b7e243b79435312f754a331791df342df5adc5c356c115ed01e4"}
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.831178 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.836724 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.851071 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" podStartSLOduration=12.851028049 podStartE2EDuration="12.851028049s" podCreationTimestamp="2026-01-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:33.845629712 +0000 UTC m=+444.546978203" watchObservedRunningTime="2026-01-30 13:50:33.851028049 +0000 UTC m=+444.552376540" Jan 30 13:50:36 crc kubenswrapper[4793]: I0130 13:50:36.961539 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:50:36 crc kubenswrapper[4793]: I0130 13:50:36.963254 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" gracePeriod=5 Jan 30 13:50:37 crc kubenswrapper[4793]: I0130 13:50:37.398597 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:50:37 crc kubenswrapper[4793]: I0130 13:50:37.854134 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Jan 30 13:50:37 crc kubenswrapper[4793]: I0130 13:50:37.854189 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cee855567135ea7489148ef33099b1918e9db05d7b89d2d000c91a4eeef3da3c"} Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.413588 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.413928 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.413981 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.414631 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.414694 4793 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4" gracePeriod=600 Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.562301 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.562374 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733430 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731265 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734408 4793 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734508 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734590 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734732 4793 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.740499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.835675 4793 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880832 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880880 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" exitCode=137 Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880949 4793 scope.go:117] "RemoveContainer" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880962 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.887112 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4" exitCode=0 Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.887149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4"} Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.887187 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"} Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.910500 4793 scope.go:117] "RemoveContainer" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" Jan 30 13:50:42 crc kubenswrapper[4793]: E0130 13:50:42.910957 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205\": container with ID starting with 33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205 not found: ID does not exist" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.911001 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"} err="failed to get container status \"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205\": rpc error: code = NotFound desc = could not find container \"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205\": container with ID starting with 33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205 not found: ID does not exist" Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.911066 4793 scope.go:117] "RemoveContainer" containerID="3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629" Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.408928 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.409650 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.424033 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.424091 4793 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6a002524-7583-4bfa-b6eb-cb91eb1be877" Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.430759 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 
13:50:44.430815 4793 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6a002524-7583-4bfa-b6eb-cb91eb1be877" Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.598133 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"] Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.598643 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager" containerID="cri-o://abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1" gracePeriod=30 Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.608898 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"] Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.609416 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager" containerID="cri-o://428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" gracePeriod=30 Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.906634 4793 generic.go:334] "Generic (PLEG): container finished" podID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerID="abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1" exitCode=0 Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.906928 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerDied","Data":"abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1"} Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.399077 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.592997 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.676555 4793 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681681 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681741 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681768 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681794 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682491 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681840 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682879 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682926 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.683082 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.683627 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.683642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca" (OuterVolumeSpecName: "client-ca") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.684291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config" (OuterVolumeSpecName: "config") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.684456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config" (OuterVolumeSpecName: "config") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.701849 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44" (OuterVolumeSpecName: "kube-api-access-mtl44") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "kube-api-access-mtl44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.701978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw" (OuterVolumeSpecName: "kube-api-access-l9dpw") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "kube-api-access-l9dpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.702830 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.704508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784412 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784447 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784458 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784466 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784474 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784484 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784492 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784502 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.913295 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.913636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793"}
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.914172 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915513 4793 generic.go:334] "Generic (PLEG): container finished" podID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" exitCode=0
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915592 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerDied","Data":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"}
event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerDied","Data":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"} Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915615 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerDied","Data":"a468e3c27d0a2cd913ba2f2058976b9b7319433f6282b4c4fb42aa2a1b0b5981"} Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915634 4793 scope.go:117] "RemoveContainer" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915690 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915724 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915893 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.921868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerDied","Data":"47aac0b2bf64b7e243b79435312f754a331791df342df5adc5c356c115ed01e4"} Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.921907 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.948751 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"] Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.952012 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"] Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.956461 4793 scope.go:117] "RemoveContainer" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" Jan 30 13:50:46 crc kubenswrapper[4793]: E0130 13:50:46.957631 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d\": container with ID starting with 428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d not found: ID does not exist" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.957672 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"} err="failed to get container status \"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d\": rpc error: code = NotFound desc = could not find container \"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d\": container with ID starting with 428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d not found: ID does not exist" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.957699 4793 scope.go:117] "RemoveContainer" containerID="abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1" Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.962269 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"] Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.967847 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"] Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.406272 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:50:47 crc kubenswrapper[4793]: E0130 13:50:47.407595 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.407625 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager" Jan 30 13:50:47 crc kubenswrapper[4793]: E0130 13:50:47.407649 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.407661 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:50:47 crc kubenswrapper[4793]: E0130 13:50:47.407685 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.407696 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.408561 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.408603 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.408616 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.409980 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.419139 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"]
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.420544 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.420701 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.420865 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.421157 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
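The reflector.go "Caches populated" records above are client-go informers warming per-namespace ConfigMap and Secret caches so the new pods' volumes can be built from local data. A rough sketch of starting such a namespace-scoped informer with client-go; the namespace and resync interval are illustrative and error handling is trimmed, so this is a sketch of the mechanism rather than kubelet's own wiring:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Watch ConfigMaps in a single namespace, the way the kubelet scopes
        // its object caches to the namespaces of pods it actually runs.
        factory := informers.NewSharedInformerFactoryWithOptions(
            clientset, 10*time.Minute,
            informers.WithNamespace("openshift-route-controller-manager"))

        inf := factory.Core().V1().ConfigMaps().Informer()
        inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                cm := obj.(*corev1.ConfigMap)
                fmt.Println("cache populated for ConfigMap", cm.Name)
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // Block until the initial list is cached, i.e. "Caches populated".
        cache.WaitForCacheSync(stop, inf.HasSynced)
    }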
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.421415 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.427358 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.429606 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.429833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.429970 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.430210 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.430485 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.430694 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.441308 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.451969 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.455268 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493579 
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493579 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493704 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493865 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.594955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
"operationExecutor.MountVolume started for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595135 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595157 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.596307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.596798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.597088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.597264 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.597760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.600609 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.601172 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.602866 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.618702 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.618814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.752101 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.771953 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.948837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.063772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.112228 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.404654 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" path="/var/lib/kubelet/pods/75d0c552-96c4-4117-81ac-2b5a0007db12/volumes" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.405681 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" path="/var/lib/kubelet/pods/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4/volumes" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.958478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerStarted","Data":"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.959610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerStarted","Data":"02125fb06afb5a468ca285614473441b8b7036e21ea110c4b7a0074fd7543686"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.960013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.960135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerStarted","Data":"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.960219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerStarted","Data":"694d456dc5c8634cc2a3e1c82c98508ef3805387920ec823e200ed8493fd208d"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.968194 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:49 crc kubenswrapper[4793]: I0130 13:50:49.005624 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" podStartSLOduration=4.00560593 podStartE2EDuration="4.00560593s" podCreationTimestamp="2026-01-30 13:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:49.003405312 +0000 UTC m=+459.704753803" watchObservedRunningTime="2026-01-30 13:50:49.00560593 +0000 
UTC m=+459.706954421" Jan 30 13:50:49 crc kubenswrapper[4793]: I0130 13:50:49.985094 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" podStartSLOduration=4.985073391 podStartE2EDuration="4.985073391s" podCreationTimestamp="2026-01-30 13:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:49.981853467 +0000 UTC m=+460.683201958" watchObservedRunningTime="2026-01-30 13:50:49.985073391 +0000 UTC m=+460.686421882" Jan 30 13:50:50 crc kubenswrapper[4793]: I0130 13:50:50.969840 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:50 crc kubenswrapper[4793]: I0130 13:50:50.975503 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:51:00 crc kubenswrapper[4793]: I0130 13:51:00.977015 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:51:00 crc kubenswrapper[4793]: I0130 13:51:00.979894 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" containerID="cri-o://871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" gracePeriod=30 Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.072498 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.072726 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" containerID="cri-o://bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" gracePeriod=30 Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.423512 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579196 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579302 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579334 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.580117 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca" (OuterVolumeSpecName: "client-ca") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.580824 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config" (OuterVolumeSpecName: "config") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.584778 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq" (OuterVolumeSpecName: "kube-api-access-gn6jq") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "kube-api-access-gn6jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.595753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680852 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680895 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680904 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680912 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.874989 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983712 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983774 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.984950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config" (OuterVolumeSpecName: "config") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.985162 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca" (OuterVolumeSpecName: "client-ca") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.985323 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.989269 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.990334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2" (OuterVolumeSpecName: "kube-api-access-xr2l2") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "kube-api-access-xr2l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026435 4793 generic.go:334] "Generic (PLEG): container finished" podID="46946b58-1b0f-4def-8b3a-ea762612980a" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" exitCode=0 Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerDied","Data":"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026576 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerDied","Data":"694d456dc5c8634cc2a3e1c82c98508ef3805387920ec823e200ed8493fd208d"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026598 4793 scope.go:117] "RemoveContainer" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.028716 4793 generic.go:334] "Generic (PLEG): container finished" podID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" exitCode=0 Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.028771 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.028746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerDied","Data":"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.029528 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerDied","Data":"02125fb06afb5a468ca285614473441b8b7036e21ea110c4b7a0074fd7543686"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.050975 4793 scope.go:117] "RemoveContainer" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.051814 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d\": container with ID starting with 871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d not found: ID does not exist" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.051855 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d"} err="failed to get container status \"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d\": rpc error: code = NotFound desc = could not find container \"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d\": container with ID starting with 871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d not found: ID does not exist" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.051883 4793 scope.go:117] "RemoveContainer" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.072766 4793 scope.go:117] "RemoveContainer" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.073248 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e\": container with ID starting with bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e not found: ID does not exist" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.073346 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e"} err="failed to get container status \"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e\": rpc error: code = NotFound desc = could not find container \"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e\": container with ID starting with bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e not found: ID does not exist" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.073455 4793 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.076467 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.085973 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086021 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086066 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086083 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086097 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086911 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.103239 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.405605 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" path="/var/lib/kubelet/pods/46946b58-1b0f-4def-8b3a-ea762612980a/volumes" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.406210 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" path="/var/lib/kubelet/pods/eee2ee98-2b55-47c1-981f-dd0898b2bf63/volumes" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410136 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.410385 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410404 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.410415 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410424 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" Jan 30 13:51:02 crc 
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410557 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410573 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.411034 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.413408 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.413942 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414181 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414338 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414482 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414977 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.416554 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"]
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.423853 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.425477 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.425680 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426024 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426443 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426725 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.439206 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.478882 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594561 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594795 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594931 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.595007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.595061 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695739 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695788 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695808 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.696877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"
Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.696877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n"
\"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.697531 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.697963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.702839 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.702893 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.718914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.728189 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.733859 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.747912 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.972716 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:02 crc kubenswrapper[4793]: W0130 13:51:02.982879 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a11e909_7bd4_4e65_bd54_61a34e199fc8.slice/crio-73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f WatchSource:0}: Error finding container 73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f: Status 404 returned error can't find the container with id 73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f Jan 30 13:51:03 crc kubenswrapper[4793]: I0130 13:51:03.011734 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:03 crc kubenswrapper[4793]: W0130 13:51:03.016143 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1245271f_581f_4ad6_88a5_fc8df98d908d.slice/crio-46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695 WatchSource:0}: Error finding container 46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695: Status 404 returned error can't find the container with id 46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695 Jan 30 13:51:03 crc kubenswrapper[4793]: I0130 13:51:03.035977 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerStarted","Data":"46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695"} Jan 30 13:51:03 crc kubenswrapper[4793]: I0130 13:51:03.039251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerStarted","Data":"73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f"} Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.047659 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerStarted","Data":"2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d"} Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.048007 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.050122 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerStarted","Data":"e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be"} Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.050520 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.054458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.055219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.084935 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" podStartSLOduration=3.084917673 podStartE2EDuration="3.084917673s" podCreationTimestamp="2026-01-30 13:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:04.066316196 +0000 UTC m=+474.767664697" watchObservedRunningTime="2026-01-30 13:51:04.084917673 +0000 UTC m=+474.786266154" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.102021 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" podStartSLOduration=4.10199949 podStartE2EDuration="4.10199949s" podCreationTimestamp="2026-01-30 13:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:04.100727617 +0000 UTC m=+474.802076128" watchObservedRunningTime="2026-01-30 13:51:04.10199949 +0000 UTC m=+474.803347981" Jan 30 13:51:11 crc kubenswrapper[4793]: I0130 13:51:11.790213 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:51:11 crc kubenswrapper[4793]: I0130 13:51:11.790724 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4vzj" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" containerID="cri-o://bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c" gracePeriod=2 Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.103765 4793 generic.go:334] "Generic (PLEG): container finished" podID="02ec4db2-0283-437a-999f-d50a10ab046c" containerID="bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c" exitCode=0 Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.104160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c"} Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.154914 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.312968 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"02ec4db2-0283-437a-999f-d50a10ab046c\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.313127 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"02ec4db2-0283-437a-999f-d50a10ab046c\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.313158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"02ec4db2-0283-437a-999f-d50a10ab046c\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.314222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities" (OuterVolumeSpecName: "utilities") pod "02ec4db2-0283-437a-999f-d50a10ab046c" (UID: "02ec4db2-0283-437a-999f-d50a10ab046c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.319718 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk" (OuterVolumeSpecName: "kube-api-access-hm6vk") pod "02ec4db2-0283-437a-999f-d50a10ab046c" (UID: "02ec4db2-0283-437a-999f-d50a10ab046c"). InnerVolumeSpecName "kube-api-access-hm6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.359923 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02ec4db2-0283-437a-999f-d50a10ab046c" (UID: "02ec4db2-0283-437a-999f-d50a10ab046c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.391413 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.391684 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mn7sx" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" containerID="cri-o://6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" gracePeriod=2 Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.415927 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.415961 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.415974 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.735739 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.920851 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.920907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.920941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.921622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities" (OuterVolumeSpecName: "utilities") pod "96451b9c-e42f-43ae-9f62-bc830fa1ad9d" (UID: "96451b9c-e42f-43ae-9f62-bc830fa1ad9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.922228 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.923539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t" (OuterVolumeSpecName: "kube-api-access-mn89t") pod "96451b9c-e42f-43ae-9f62-bc830fa1ad9d" (UID: "96451b9c-e42f-43ae-9f62-bc830fa1ad9d"). InnerVolumeSpecName "kube-api-access-mn89t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.942385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96451b9c-e42f-43ae-9f62-bc830fa1ad9d" (UID: "96451b9c-e42f-43ae-9f62-bc830fa1ad9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.023266 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.023330 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.113213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"ee249470c28be7e643027b7d1d76ee1a880e2751bfa6c780b72800ea7daeb066"} Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.113232 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.113283 4793 scope.go:117] "RemoveContainer" containerID="bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116176 4793 generic.go:334] "Generic (PLEG): container finished" podID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" exitCode=0 Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116205 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c"} Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116233 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"097e24f55ac27743bd9630217aba68c9f9433798eb25d4a7ca41ee8c4336a653"} Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116246 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.134843 4793 scope.go:117] "RemoveContainer" containerID="b9519a38e06d14f0b9522f2ca7c944b5d849d5137311c5fba903cacfaefb9b67" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.146399 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.157691 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.161258 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.167330 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.177910 4793 scope.go:117] "RemoveContainer" containerID="9d4a750d40d93b392b9501779e0e72734cfa6f671669f4891033addc84b52774" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.199013 4793 scope.go:117] "RemoveContainer" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.221318 4793 scope.go:117] "RemoveContainer" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.234658 4793 scope.go:117] "RemoveContainer" containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.254930 4793 scope.go:117] "RemoveContainer" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" Jan 30 13:51:13 crc kubenswrapper[4793]: E0130 13:51:13.255751 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c\": container with ID starting with 6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c not found: ID does not exist" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.255800 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c"} err="failed to get container status \"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c\": rpc error: code = NotFound desc = could not find container \"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c\": container with ID starting with 6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c not found: ID does not exist" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.255831 4793 scope.go:117] "RemoveContainer" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" Jan 30 13:51:13 crc kubenswrapper[4793]: E0130 13:51:13.256277 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028\": container with ID starting with 7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028 not found: ID does not exist" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" Jan 30 
13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.256298 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028"} err="failed to get container status \"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028\": rpc error: code = NotFound desc = could not find container \"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028\": container with ID starting with 7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028 not found: ID does not exist" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.256312 4793 scope.go:117] "RemoveContainer" containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" Jan 30 13:51:13 crc kubenswrapper[4793]: E0130 13:51:13.256571 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf\": container with ID starting with 6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf not found: ID does not exist" containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.256646 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf"} err="failed to get container status \"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf\": rpc error: code = NotFound desc = could not find container \"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf\": container with ID starting with 6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf not found: ID does not exist" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.408901 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" path="/var/lib/kubelet/pods/02ec4db2-0283-437a-999f-d50a10ab046c/volumes" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.409649 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" path="/var/lib/kubelet/pods/96451b9c-e42f-43ae-9f62-bc830fa1ad9d/volumes" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.593661 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.593936 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fxl8f" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" containerID="cri-o://7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" gracePeriod=2 Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.929765 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.950699 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"0005ba9f-0f70-4df4-b588-8e6f941fec61\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.950756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"0005ba9f-0f70-4df4-b588-8e6f941fec61\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.950800 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"0005ba9f-0f70-4df4-b588-8e6f941fec61\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.951822 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities" (OuterVolumeSpecName: "utilities") pod "0005ba9f-0f70-4df4-b588-8e6f941fec61" (UID: "0005ba9f-0f70-4df4-b588-8e6f941fec61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.954598 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.986357 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd" (OuterVolumeSpecName: "kube-api-access-2w4dd") pod "0005ba9f-0f70-4df4-b588-8e6f941fec61" (UID: "0005ba9f-0f70-4df4-b588-8e6f941fec61"). InnerVolumeSpecName "kube-api-access-2w4dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.055390 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.103761 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0005ba9f-0f70-4df4-b588-8e6f941fec61" (UID: "0005ba9f-0f70-4df4-b588-8e6f941fec61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.156906 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160846 4793 generic.go:334] "Generic (PLEG): container finished" podID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" exitCode=0 Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160913 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087"} Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160960 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"13f1368c8d56c2f3e8a8787fdd36533c727a2ee0ef9f036522e165e8dc981e1f"} Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160980 4793 scope.go:117] "RemoveContainer" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.161200 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.180528 4793 scope.go:117] "RemoveContainer" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.204028 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.204104 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.222613 4793 scope.go:117] "RemoveContainer" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.238334 4793 scope.go:117] "RemoveContainer" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" Jan 30 13:51:15 crc kubenswrapper[4793]: E0130 13:51:15.238837 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087\": container with ID starting with 7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087 not found: ID does not exist" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.238896 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087"} err="failed to get container status \"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087\": rpc error: code = NotFound desc = could not find container \"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087\": container with ID starting with 7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087 not found: ID does not exist" Jan 30 13:51:15 crc 
kubenswrapper[4793]: I0130 13:51:15.238932 4793 scope.go:117] "RemoveContainer" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" Jan 30 13:51:15 crc kubenswrapper[4793]: E0130 13:51:15.239473 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d\": container with ID starting with 0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d not found: ID does not exist" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.239499 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d"} err="failed to get container status \"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d\": rpc error: code = NotFound desc = could not find container \"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d\": container with ID starting with 0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d not found: ID does not exist" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.239518 4793 scope.go:117] "RemoveContainer" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" Jan 30 13:51:15 crc kubenswrapper[4793]: E0130 13:51:15.239850 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e\": container with ID starting with 11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e not found: ID does not exist" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.239900 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e"} err="failed to get container status \"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e\": rpc error: code = NotFound desc = could not find container \"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e\": container with ID starting with 11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e not found: ID does not exist" Jan 30 13:51:16 crc kubenswrapper[4793]: I0130 13:51:16.404291 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" path="/var/lib/kubelet/pods/0005ba9f-0f70-4df4-b588-8e6f941fec61/volumes" Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.956562 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.957011 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" containerID="cri-o://e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be" gracePeriod=30 Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.978155 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.978629 
4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerName="route-controller-manager" containerID="cri-o://2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d" gracePeriod=30 Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.195938 4793 generic.go:334] "Generic (PLEG): container finished" podID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerID="2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d" exitCode=0 Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.196028 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerDied","Data":"2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d"} Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.197776 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerID="e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be" exitCode=0 Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.197808 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerDied","Data":"e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be"} Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.407269 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587005 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587038 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587271 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc 
kubenswrapper[4793]: I0130 13:51:21.590321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config" (OuterVolumeSpecName: "config") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.590856 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.590848 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.594804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.598138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9" (OuterVolumeSpecName: "kube-api-access-6vvh9") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "kube-api-access-6vvh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.644503 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688784 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689068 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689083 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689092 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689101 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689109 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689732 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca" (OuterVolumeSpecName: "client-ca") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.690456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config" (OuterVolumeSpecName: "config") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.694123 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn" (OuterVolumeSpecName: "kube-api-access-knwtn") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "kube-api-access-knwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.694837 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789725 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789756 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789766 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789774 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.204622 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerDied","Data":"46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695"} Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.205517 4793 scope.go:117] "RemoveContainer" containerID="2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.205689 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.210425 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerDied","Data":"73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f"} Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.210504 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.231817 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.240110 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.249099 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.249443 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.406409 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" path="/var/lib/kubelet/pods/1245271f-581f-4ad6-88a5-fc8df98d908d/volumes" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.407600 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" path="/var/lib/kubelet/pods/7a11e909-7bd4-4e65-bd54-61a34e199fc8/volumes" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425165 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425392 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425409 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425420 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425427 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425438 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425445 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425453 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" 
containerName="route-controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425459 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerName="route-controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425469 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425476 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425487 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425493 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425504 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425511 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425688 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425701 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425710 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425717 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425725 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425732 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425743 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425750 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425854 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425866 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425879 4793 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425889 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425897 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerName="route-controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.426373 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.428879 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.429576 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431288 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431501 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.432243 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.432375 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.432516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.435222 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.435405 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.435936 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.436484 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.436730 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.443912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.444846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.454457 4793 scope.go:117] "RemoveContainer" containerID="e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.458987 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600735 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600978 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.601125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.601804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.601945 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702969 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod 
\"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703088 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703114 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.704174 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.704389 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.705500 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.296322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.296680 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc 
kubenswrapper[4793]: I0130 13:51:23.297896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.298584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.299363 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.299591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.356208 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.368485 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.580783 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:23 crc kubenswrapper[4793]: W0130 13:51:23.584371 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331250ca_4896_4db5_9193_0bc4014543aa.slice/crio-2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50 WatchSource:0}: Error finding container 2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50: Status 404 returned error can't find the container with id 2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50 Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.624395 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:24 crc kubenswrapper[4793]: I0130 13:51:24.224005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerStarted","Data":"5b10f9fb8b30b6886a920ccc357efc6e18c777018ff73968b7b489e1cd955680"} Jan 30 13:51:24 crc kubenswrapper[4793]: I0130 13:51:24.225501 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerStarted","Data":"2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50"} Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.237811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerStarted","Data":"5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9"} Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.239537 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.241143 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerStarted","Data":"bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6"} Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.241750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.245955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.249432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.263953 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" podStartSLOduration=5.263935757 podStartE2EDuration="5.263935757s" 
podCreationTimestamp="2026-01-30 13:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:25.25755132 +0000 UTC m=+495.958899821" watchObservedRunningTime="2026-01-30 13:51:25.263935757 +0000 UTC m=+495.965284248" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.296911 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" podStartSLOduration=5.296891749 podStartE2EDuration="5.296891749s" podCreationTimestamp="2026-01-30 13:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:25.291849167 +0000 UTC m=+495.993197668" watchObservedRunningTime="2026-01-30 13:51:25.296891749 +0000 UTC m=+495.998240240" Jan 30 13:51:28 crc kubenswrapper[4793]: I0130 13:51:28.549429 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:51:40 crc kubenswrapper[4793]: I0130 13:51:40.942544 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:40 crc kubenswrapper[4793]: I0130 13:51:40.944261 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" containerID="cri-o://5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9" gracePeriod=30 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.049885 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.050495 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" containerID="cri-o://bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6" gracePeriod=30 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.330216 4793 generic.go:334] "Generic (PLEG): container finished" podID="331250ca-4896-4db5-9193-0bc4014543aa" containerID="bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6" exitCode=0 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.330282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerDied","Data":"bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6"} Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.333222 4793 generic.go:334] "Generic (PLEG): container finished" podID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerID="5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9" exitCode=0 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.333274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerDied","Data":"5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9"} Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 
13:51:41.804689 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.935825 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.935889 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.935960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936012 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936794 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config" (OuterVolumeSpecName: "config") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.937315 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.937333 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.937424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.941216 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.944124 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf" (OuterVolumeSpecName: "kube-api-access-pgfwf") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "kube-api-access-pgfwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.983343 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.037619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.037860 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.037931 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038027 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038242 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038303 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038398 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038944 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config" (OuterVolumeSpecName: "config") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.041499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j" (OuterVolumeSpecName: "kube-api-access-jj76j") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). 
InnerVolumeSpecName "kube-api-access-jj76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.042028 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139128 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139212 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139226 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139240 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.341100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerDied","Data":"5b10f9fb8b30b6886a920ccc357efc6e18c777018ff73968b7b489e1cd955680"} Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.341165 4793 scope.go:117] "RemoveContainer" containerID="5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.341933 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.344844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerDied","Data":"2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50"} Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.344928 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.379614 4793 scope.go:117] "RemoveContainer" containerID="bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.382355 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.394558 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.406815 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331250ca-4896-4db5-9193-0bc4014543aa" path="/var/lib/kubelet/pods/331250ca-4896-4db5-9193-0bc4014543aa/volumes" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.407376 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.407417 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438211 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86cbff96d8-xtxlp"] Jan 30 13:51:42 crc kubenswrapper[4793]: E0130 13:51:42.438535 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438550 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: E0130 13:51:42.438560 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438566 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438669 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438683 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.439309 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442111 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442397 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442558 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442729 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.443014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.443139 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-client-ca\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwlb\" (UniqueName: \"kubernetes.io/projected/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-kube-api-access-bnwlb\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-serving-cert\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444378 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-proxy-ca-bundles\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444409 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-config\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.451365 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.452439 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.453144 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455539 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455650 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455719 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455850 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455931 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455969 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.459927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.466107 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86cbff96d8-xtxlp"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545481 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-config\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhlb\" (UniqueName: \"kubernetes.io/projected/19a2f709-4d35-44f7-a44f-ab7a40157469-kube-api-access-xdhlb\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545566 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a2f709-4d35-44f7-a44f-ab7a40157469-serving-cert\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-client-ca\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545645 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-client-ca\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwlb\" (UniqueName: \"kubernetes.io/projected/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-kube-api-access-bnwlb\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545699 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-serving-cert\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-proxy-ca-bundles\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-config\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.547565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-client-ca\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.548482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-proxy-ca-bundles\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.549581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-config\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " 
pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.550822 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-serving-cert\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.567696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwlb\" (UniqueName: \"kubernetes.io/projected/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-kube-api-access-bnwlb\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-config\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646718 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhlb\" (UniqueName: \"kubernetes.io/projected/19a2f709-4d35-44f7-a44f-ab7a40157469-kube-api-access-xdhlb\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646747 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a2f709-4d35-44f7-a44f-ab7a40157469-serving-cert\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-client-ca\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.647629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-client-ca\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.648342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-config\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.650066 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a2f709-4d35-44f7-a44f-ab7a40157469-serving-cert\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.667839 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhlb\" (UniqueName: \"kubernetes.io/projected/19a2f709-4d35-44f7-a44f-ab7a40157469-kube-api-access-xdhlb\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.770070 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.781719 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.985459 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86cbff96d8-xtxlp"] Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.025170 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw"] Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.364311 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" event={"ID":"19a2f709-4d35-44f7-a44f-ab7a40157469","Type":"ContainerStarted","Data":"cec1fbbf6d73f1f7b56b5701008347818034a74cf6cec9e99744af4f5bd2e863"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.364353 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" event={"ID":"19a2f709-4d35-44f7-a44f-ab7a40157469","Type":"ContainerStarted","Data":"7b0092efa97ac65157c72d9464478a7355ff3c6b5b2f3e2fdf538ee99d4e5bf3"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.367415 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" event={"ID":"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8","Type":"ContainerStarted","Data":"72704c66c3729cb093fbbd41eeb70141ec2256d549d5988325a79c0dd98919c3"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.367443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" event={"ID":"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8","Type":"ContainerStarted","Data":"a3508b306af17b26cb81b0c9ba3ee0eeb0a48915fe92361a1ce14cc6c384f368"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.368013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.373891 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.402330 4793 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" podStartSLOduration=3.402310729 podStartE2EDuration="3.402310729s" podCreationTimestamp="2026-01-30 13:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:43.389213186 +0000 UTC m=+514.090561697" watchObservedRunningTime="2026-01-30 13:51:43.402310729 +0000 UTC m=+514.103659230" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.376027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.380850 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.398941 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" podStartSLOduration=3.398924348 podStartE2EDuration="3.398924348s" podCreationTimestamp="2026-01-30 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:44.396416453 +0000 UTC m=+515.097764954" watchObservedRunningTime="2026-01-30 13:51:44.398924348 +0000 UTC m=+515.100272839" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.410588 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" path="/var/lib/kubelet/pods/f6b5259e-bc29-45fb-b54a-9ea88b2c9455/volumes" Jan 30 13:51:53 crc kubenswrapper[4793]: I0130 13:51:53.579527 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" containerID="cri-o://2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794" gracePeriod=15 Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.434842 4793 generic.go:334] "Generic (PLEG): container finished" podID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerID="2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794" exitCode=0 Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.434895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerDied","Data":"2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794"} Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.700266 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746041 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz"] Jan 30 13:51:54 crc kubenswrapper[4793]: E0130 13:51:54.746360 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746381 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746483 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.751351 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz"] Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.821989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822449 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822498 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822531 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822576 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822629 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822654 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822681 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822702 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822746 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822776 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822799 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 
13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822875 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822903 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvn6\" (UniqueName: \"kubernetes.io/projected/d0b6d37a-e922-4801-b3ef-78204821353f-kube-api-access-kzvn6\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-audit-policies\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822983 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-session\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823034 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-error\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: 
\"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823080 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823106 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0b6d37a-e922-4801-b3ef-78204821353f-audit-dir\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823140 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-login\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823255 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823739 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.824238 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.825463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828573 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828663 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb" (OuterVolumeSpecName: "kube-api-access-4vhgb") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "kube-api-access-4vhgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828920 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.829139 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.829295 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.830260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.830620 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.838668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923836 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-login\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923929 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvn6\" (UniqueName: \"kubernetes.io/projected/d0b6d37a-e922-4801-b3ef-78204821353f-kube-api-access-kzvn6\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-audit-policies\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924076 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " 
pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-session\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-error\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0b6d37a-e922-4801-b3ef-78204821353f-audit-dir\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924286 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924296 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924306 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924315 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924324 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924334 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924344 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924355 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924365 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924373 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924382 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924390 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924398 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924627 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0b6d37a-e922-4801-b3ef-78204821353f-audit-dir\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: 
I0130 13:51:54.925711 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.927811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.927846 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-login\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928251 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-audit-policies\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.929125 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-session\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.929867 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.929944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.932509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-error\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.946091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvn6\" (UniqueName: \"kubernetes.io/projected/d0b6d37a-e922-4801-b3ef-78204821353f-kube-api-access-kzvn6\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.067226 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.314191 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz"] Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.443068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerDied","Data":"0e39fca869bb577560ccf5c5e0fd7294441d98f691e7a0b7c896fff632efcbeb"} Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.443113 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.443129 4793 scope.go:117] "RemoveContainer" containerID="2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.447935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" event={"ID":"d0b6d37a-e922-4801-b3ef-78204821353f","Type":"ContainerStarted","Data":"bbfcddcabfb6a27a0277b67b3c2861ba194b2dde5aeaa47c2123bc529e8a0e4f"} Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.477553 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.481000 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:51:55 crc kubenswrapper[4793]: E0130 13:51:55.568907 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a64abca_3318_4208_8edb_1474e0ba5f2f.slice/crio-0e39fca869bb577560ccf5c5e0fd7294441d98f691e7a0b7c896fff632efcbeb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a64abca_3318_4208_8edb_1474e0ba5f2f.slice\": RecentStats: unable to find data in memory cache]" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.414333 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" path="/var/lib/kubelet/pods/4a64abca-3318-4208-8edb-1474e0ba5f2f/volumes" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.463542 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" event={"ID":"d0b6d37a-e922-4801-b3ef-78204821353f","Type":"ContainerStarted","Data":"a4995f51bf42afc49c864cd27829050a7585e6c40004540a25dd60f10256140a"} Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.464210 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.474478 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.515026 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" podStartSLOduration=28.515011313 podStartE2EDuration="28.515011313s" podCreationTimestamp="2026-01-30 13:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:56.488331764 +0000 UTC m=+527.189680265" watchObservedRunningTime="2026-01-30 13:51:56.515011313 +0000 UTC m=+527.216359804" Jan 30 13:52:13 crc kubenswrapper[4793]: I0130 13:52:13.537295 4793 scope.go:117] "RemoveContainer" containerID="9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.320847 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.322582 4793 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-g9t8x" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" containerID="cri-o://393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.324722 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.325110 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6qnl2" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" containerID="cri-o://84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.344191 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.344693 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" containerID="cri-o://12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.365342 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.365848 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kvlgd" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" containerID="cri-o://539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.376758 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zkjbp"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.378303 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.386768 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.387067 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vn6kf" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" containerID="cri-o://04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.447373 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zkjbp"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.547458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.547787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.547974 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdcw\" (UniqueName: \"kubernetes.io/projected/5834bf4b-676f-4ece-bcee-28949a7109ca-kube-api-access-fsdcw\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.649235 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.649285 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.649329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdcw\" (UniqueName: \"kubernetes.io/projected/5834bf4b-676f-4ece-bcee-28949a7109ca-kube-api-access-fsdcw\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.650825 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.655362 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.664665 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdcw\" (UniqueName: \"kubernetes.io/projected/5834bf4b-676f-4ece-bcee-28949a7109ca-kube-api-access-fsdcw\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.816443 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.949657 4793 generic.go:334] "Generic (PLEG): container finished" podID="89a43c58-d327-429a-96cd-9f9f5393368a" containerID="04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.949890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.955545 4793 generic.go:334] "Generic (PLEG): container finished" podID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerID="539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.955722 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.958882 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.958946 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.959326 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.959396 4793 scope.go:117] "RemoveContainer" 
containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.973234 4793 generic.go:334] "Generic (PLEG): container finished" podID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerID="84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.973303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.979071 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.979036 4793 generic.go:334] "Generic (PLEG): container finished" podID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerID="393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1" exitCode=0 Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.261013 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zkjbp"] Jan 30 13:52:20 crc kubenswrapper[4793]: W0130 13:52:20.291320 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5834bf4b_676f_4ece_bcee_28949a7109ca.slice/crio-5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5 WatchSource:0}: Error finding container 5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5: Status 404 returned error can't find the container with id 5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5 Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.365904 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.447737 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.530440 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.537569 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.541347 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559577 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"89a43c58-d327-429a-96cd-9f9f5393368a\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559657 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"89a43c58-d327-429a-96cd-9f9f5393368a\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559687 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559716 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"89a43c58-d327-429a-96cd-9f9f5393368a\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559748 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.565715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4" (OuterVolumeSpecName: "kube-api-access-nhvt4") pod "08b55ba0-087d-42ec-a0c5-538f0a3c0987" (UID: "08b55ba0-087d-42ec-a0c5-538f0a3c0987"). InnerVolumeSpecName "kube-api-access-nhvt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.567037 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities" (OuterVolumeSpecName: "utilities") pod "08b55ba0-087d-42ec-a0c5-538f0a3c0987" (UID: "08b55ba0-087d-42ec-a0c5-538f0a3c0987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.582165 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities" (OuterVolumeSpecName: "utilities") pod "89a43c58-d327-429a-96cd-9f9f5393368a" (UID: "89a43c58-d327-429a-96cd-9f9f5393368a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.584540 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln" (OuterVolumeSpecName: "kube-api-access-pwrln") pod "89a43c58-d327-429a-96cd-9f9f5393368a" (UID: "89a43c58-d327-429a-96cd-9f9f5393368a"). InnerVolumeSpecName "kube-api-access-pwrln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.587425 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08b55ba0-087d-42ec-a0c5-538f0a3c0987" (UID: "08b55ba0-087d-42ec-a0c5-538f0a3c0987"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662421 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"840c8b00-73a4-4378-b5a8-83f2595916a4\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662482 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"840c8b00-73a4-4378-b5a8-83f2595916a4\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662537 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"b34660b0-a161-4587-96a6-1a86a2e3f632\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662560 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") pod \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"b34660b0-a161-4587-96a6-1a86a2e3f632\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " Jan 30 13:52:20 crc 
kubenswrapper[4793]: I0130 13:52:20.662641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"840c8b00-73a4-4378-b5a8-83f2595916a4\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"b34660b0-a161-4587-96a6-1a86a2e3f632\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663520 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663547 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663558 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663612 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663624 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.666438 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities" (OuterVolumeSpecName: "utilities") pod "b34660b0-a161-4587-96a6-1a86a2e3f632" (UID: "b34660b0-a161-4587-96a6-1a86a2e3f632"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.671866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities" (OuterVolumeSpecName: "utilities") pod "840c8b00-73a4-4378-b5a8-83f2595916a4" (UID: "840c8b00-73a4-4378-b5a8-83f2595916a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.672459 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ee8452f4-fe2b-44d0-a26a-f7171e108fc9" (UID: "ee8452f4-fe2b-44d0-a26a-f7171e108fc9"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.680259 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv" (OuterVolumeSpecName: "kube-api-access-zg5zv") pod "b34660b0-a161-4587-96a6-1a86a2e3f632" (UID: "b34660b0-a161-4587-96a6-1a86a2e3f632"). InnerVolumeSpecName "kube-api-access-zg5zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.686186 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ee8452f4-fe2b-44d0-a26a-f7171e108fc9" (UID: "ee8452f4-fe2b-44d0-a26a-f7171e108fc9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.707594 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp" (OuterVolumeSpecName: "kube-api-access-p9nnp") pod "840c8b00-73a4-4378-b5a8-83f2595916a4" (UID: "840c8b00-73a4-4378-b5a8-83f2595916a4"). InnerVolumeSpecName "kube-api-access-p9nnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.715178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft" (OuterVolumeSpecName: "kube-api-access-sh6ft") pod "ee8452f4-fe2b-44d0-a26a-f7171e108fc9" (UID: "ee8452f4-fe2b-44d0-a26a-f7171e108fc9"). InnerVolumeSpecName "kube-api-access-sh6ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.728531 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b34660b0-a161-4587-96a6-1a86a2e3f632" (UID: "b34660b0-a161-4587-96a6-1a86a2e3f632"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.735629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89a43c58-d327-429a-96cd-9f9f5393368a" (UID: "89a43c58-d327-429a-96cd-9f9f5393368a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.751756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "840c8b00-73a4-4378-b5a8-83f2595916a4" (UID: "840c8b00-73a4-4378-b5a8-83f2595916a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765195 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765228 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765240 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765254 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765263 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765271 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765279 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765287 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765295 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765303 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.985348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"e438cc892f7ad0406801bd88b27ea7d9474a125c514f11d8ac2ab76f42215f27"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.985400 4793 scope.go:117] "RemoveContainer" containerID="539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.985884 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.986639 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"97c187117ac894b4f40744eaace0837c1dade5f185e1a06955e03936c650d6b8"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.986725 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.988971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"c106e074002678528ae31ccdf1bb58932690b2a742055da2e9f297d7f5cc6c7c"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.989094 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.995232 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.995263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"0e22ed488b0d95eaf0cf80ba9106bf9da157b5ab0630c5fce06e88b1a1a7e207"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.996393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" event={"ID":"5834bf4b-676f-4ece-bcee-28949a7109ca","Type":"ContainerStarted","Data":"b8fcf2220c6b92f86f590aee94530fd0f54a302ad02a5fa5cce8ea811b739ea5"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.996434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" event={"ID":"5834bf4b-676f-4ece-bcee-28949a7109ca","Type":"ContainerStarted","Data":"5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.996727 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.998829 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.001388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"1f4643d93c77f9c1fa9d15f80b1a4b9e9c2ad2fc279deeae64b1715da148c011"} Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.001456 4793 scope.go:117] "RemoveContainer" containerID="a39b5636265cc040beb743a7d92b7de07f6a61cbb255d62d9adbf1ef86fd75b0" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.001457 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.023819 4793 scope.go:117] "RemoveContainer" containerID="bf4b42ce53f022eba5077f61f642433a8e1373279291fcdbe9bff308d17c0e0d" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.065981 4793 scope.go:117] "RemoveContainer" containerID="12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.070295 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" podStartSLOduration=2.070263998 podStartE2EDuration="2.070263998s" podCreationTimestamp="2026-01-30 13:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:52:21.02946036 +0000 UTC m=+551.730808851" watchObservedRunningTime="2026-01-30 13:52:21.070263998 +0000 UTC m=+551.771612489" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.099912 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.109315 4793 scope.go:117] "RemoveContainer" containerID="84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.116092 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.130644 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.148007 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.154399 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.169576 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.175701 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.180098 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.180465 4793 scope.go:117] "RemoveContainer" containerID="3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.184362 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.188404 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.212730 4793 scope.go:117] "RemoveContainer" containerID="f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.234220 4793 scope.go:117] "RemoveContainer" containerID="393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.255547 4793 
scope.go:117] "RemoveContainer" containerID="0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.312721 4793 scope.go:117] "RemoveContainer" containerID="3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.336757 4793 scope.go:117] "RemoveContainer" containerID="04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.359279 4793 scope.go:117] "RemoveContainer" containerID="17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.379141 4793 scope.go:117] "RemoveContainer" containerID="1292ed33cb4910e7379d650e9bdaa57110f788906801a44590e292cca7705790" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131287 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgznc"] Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131543 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131560 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131570 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131578 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131593 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131600 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131609 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131616 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131628 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131635 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131659 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131670 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131678 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131685 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131695 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131702 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131712 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131720 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131729 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131736 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131746 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131752 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131760 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131767 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131777 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131795 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131801 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131813 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131820 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131831 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131840 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131945 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131959 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131980 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131988 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131999 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132010 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.132132 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132239 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132250 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.136022 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.138107 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.142713 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgznc"] Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.281345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rmm\" (UniqueName: \"kubernetes.io/projected/79353c7a-f5cf-43e5-9c5a-443565d0cca7-kube-api-access-b8rmm\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.281932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-utilities\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.282114 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-catalog-content\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.383392 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-utilities\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.383908 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-utilities\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.383918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-catalog-content\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.384010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8rmm\" (UniqueName: \"kubernetes.io/projected/79353c7a-f5cf-43e5-9c5a-443565d0cca7-kube-api-access-b8rmm\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.384738 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-catalog-content\") pod \"redhat-marketplace-rgznc\" (UID: 
\"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.407360 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8rmm\" (UniqueName: \"kubernetes.io/projected/79353c7a-f5cf-43e5-9c5a-443565d0cca7-kube-api-access-b8rmm\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.409405 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" path="/var/lib/kubelet/pods/08b55ba0-087d-42ec-a0c5-538f0a3c0987/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.410202 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" path="/var/lib/kubelet/pods/840c8b00-73a4-4378-b5a8-83f2595916a4/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.410911 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" path="/var/lib/kubelet/pods/89a43c58-d327-429a-96cd-9f9f5393368a/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.412123 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" path="/var/lib/kubelet/pods/b34660b0-a161-4587-96a6-1a86a2e3f632/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.412843 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" path="/var/lib/kubelet/pods/ee8452f4-fe2b-44d0-a26a-f7171e108fc9/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.458259 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.629681 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgznc"] Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.021255 4793 generic.go:334] "Generic (PLEG): container finished" podID="79353c7a-f5cf-43e5-9c5a-443565d0cca7" containerID="930e82898eecd32747e439313325fb5db69a9f46a5de40cf183e52e534aee9ca" exitCode=0 Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.021395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerDied","Data":"930e82898eecd32747e439313325fb5db69a9f46a5de40cf183e52e534aee9ca"} Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.021433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerStarted","Data":"316fa15aff1fca6d3d61f0e1f08e0c576e6fa49e0a4f5c9f26ce65b8a69939f8"} Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.023339 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.130441 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5rxw"] Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.131826 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.133907 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.174726 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5rxw"] Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.299323 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-catalog-content\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.299449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-utilities\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.299483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxbh\" (UniqueName: \"kubernetes.io/projected/6be7bc1b-60e4-429d-b706-90063b00442e-kube-api-access-nkxbh\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-catalog-content\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400347 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-utilities\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400366 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkxbh\" (UniqueName: \"kubernetes.io/projected/6be7bc1b-60e4-429d-b706-90063b00442e-kube-api-access-nkxbh\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-utilities\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.402216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-catalog-content\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " 
pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.426453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkxbh\" (UniqueName: \"kubernetes.io/projected/6be7bc1b-60e4-429d-b706-90063b00442e-kube-api-access-nkxbh\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.462963 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.641629 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5rxw"] Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.028731 4793 generic.go:334] "Generic (PLEG): container finished" podID="6be7bc1b-60e4-429d-b706-90063b00442e" containerID="aca04a4f1f2617025f87dff79f4716691f846f7673daa7e5d04c273110c42170" exitCode=0 Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.028775 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerDied","Data":"aca04a4f1f2617025f87dff79f4716691f846f7673daa7e5d04c273110c42170"} Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.028799 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerStarted","Data":"cb97b0929a7fa2cd74a9d4cf8809ccbd3fb47f01a4dd388a5e6cb18f2c97e1f3"} Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.531426 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lcb4v"] Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.532811 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.537661 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.558614 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcb4v"] Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.715317 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvntf\" (UniqueName: \"kubernetes.io/projected/adcaff8e-ed88-4fa1-af55-aedc60d35481-kube-api-access-cvntf\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.715379 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-catalog-content\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.715499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-utilities\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.816787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvntf\" (UniqueName: \"kubernetes.io/projected/adcaff8e-ed88-4fa1-af55-aedc60d35481-kube-api-access-cvntf\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.817341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-catalog-content\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.817512 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-utilities\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.817894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-catalog-content\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.818139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-utilities\") pod \"community-operators-lcb4v\" (UID: 
\"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.841896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvntf\" (UniqueName: \"kubernetes.io/projected/adcaff8e-ed88-4fa1-af55-aedc60d35481-kube-api-access-cvntf\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.871948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.038712 4793 generic.go:334] "Generic (PLEG): container finished" podID="79353c7a-f5cf-43e5-9c5a-443565d0cca7" containerID="9b700715fdd4398f415461325325bd61f69b964ffd1362b02505fc5cc9b8afe1" exitCode=0 Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.039147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerDied","Data":"9b700715fdd4398f415461325325bd61f69b964ffd1362b02505fc5cc9b8afe1"} Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.177910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcb4v"] Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.535817 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-67xsr"] Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.537703 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.539770 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.546168 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67xsr"] Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.628562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkvd\" (UniqueName: \"kubernetes.io/projected/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-kube-api-access-rdkvd\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.628629 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-utilities\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.628707 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-catalog-content\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.730612 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-catalog-content\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.730666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkvd\" (UniqueName: \"kubernetes.io/projected/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-kube-api-access-rdkvd\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.730685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-utilities\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.731200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-catalog-content\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.731247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-utilities\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.764243 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkvd\" (UniqueName: \"kubernetes.io/projected/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-kube-api-access-rdkvd\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.875096 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: E0130 13:52:25.913255 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be7bc1b_60e4_429d_b706_90063b00442e.slice/crio-conmon-47e990fbb80040cf69648b7b7c078b3963a143cb2e576f475cf3b07883f90d34.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.047696 4793 generic.go:334] "Generic (PLEG): container finished" podID="6be7bc1b-60e4-429d-b706-90063b00442e" containerID="47e990fbb80040cf69648b7b7c078b3963a143cb2e576f475cf3b07883f90d34" exitCode=0 Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.047772 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerDied","Data":"47e990fbb80040cf69648b7b7c078b3963a143cb2e576f475cf3b07883f90d34"} Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.055061 4793 generic.go:334] "Generic (PLEG): container finished" podID="adcaff8e-ed88-4fa1-af55-aedc60d35481" containerID="a69823748d7cafe556ac4bb75e41342c6daf8cb5c0d166ea11440a37e56fac38" exitCode=0 Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.055098 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerDied","Data":"a69823748d7cafe556ac4bb75e41342c6daf8cb5c0d166ea11440a37e56fac38"} Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.055120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerStarted","Data":"bf56ebe8af3ddb557a4352e48c282d6e46aeb85d3b9b270adfeaa714aef5b418"} Jan 30 13:52:26 crc kubenswrapper[4793]: W0130 13:52:26.245765 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a0cd3b8_afdf_4eb1_b818_565ce4d0647d.slice/crio-a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38 WatchSource:0}: Error finding container a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38: Status 404 returned error can't find the container with id a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38 Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.246015 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67xsr"] Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.061802 4793 generic.go:334] "Generic (PLEG): container finished" podID="4a0cd3b8-afdf-4eb1-b818-565ce4d0647d" containerID="0cd9a1b7c5c52728ff5e012bf166e9b2ed9f732690a3ba82987c58f8a440a01b" exitCode=0 Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.061984 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerDied","Data":"0cd9a1b7c5c52728ff5e012bf166e9b2ed9f732690a3ba82987c58f8a440a01b"} Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.062199 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" 
event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerStarted","Data":"a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38"} Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.064948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerStarted","Data":"6cc3f4a77ecb1125601f957830603c5160f420d3df61316dbe693a785008f6f6"} Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.101006 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgznc" podStartSLOduration=2.104613174 podStartE2EDuration="5.100989404s" podCreationTimestamp="2026-01-30 13:52:22 +0000 UTC" firstStartedPulling="2026-01-30 13:52:23.022995484 +0000 UTC m=+553.724343975" lastFinishedPulling="2026-01-30 13:52:26.019371714 +0000 UTC m=+556.720720205" observedRunningTime="2026-01-30 13:52:27.098561092 +0000 UTC m=+557.799909593" watchObservedRunningTime="2026-01-30 13:52:27.100989404 +0000 UTC m=+557.802337895" Jan 30 13:52:29 crc kubenswrapper[4793]: I0130 13:52:29.076606 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerStarted","Data":"c0284e5136e09cf729226e342eaaf5612bc1f32f83f8b477abd5086512267844"} Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.458853 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.459542 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.508628 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.541958 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5rxw" podStartSLOduration=5.366903919 podStartE2EDuration="9.541937715s" podCreationTimestamp="2026-01-30 13:52:23 +0000 UTC" firstStartedPulling="2026-01-30 13:52:24.030170711 +0000 UTC m=+554.731519212" lastFinishedPulling="2026-01-30 13:52:28.205204517 +0000 UTC m=+558.906553008" observedRunningTime="2026-01-30 13:52:30.103584191 +0000 UTC m=+560.804932682" watchObservedRunningTime="2026-01-30 13:52:32.541937715 +0000 UTC m=+563.243286206" Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.099169 4793 generic.go:334] "Generic (PLEG): container finished" podID="adcaff8e-ed88-4fa1-af55-aedc60d35481" containerID="42b2099c6c78fdddab0dab33f7a437e712ef0090700cd534c972f42d6ab5e5e7" exitCode=0 Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.099371 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerDied","Data":"42b2099c6c78fdddab0dab33f7a437e712ef0090700cd534c972f42d6ab5e5e7"} Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.140145 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.463486 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.463549 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.107741 4793 generic.go:334] "Generic (PLEG): container finished" podID="4a0cd3b8-afdf-4eb1-b818-565ce4d0647d" containerID="90129e008a4dc89b51e60eb13c1d26e28f5c7cdce257c5589da14191ad251cb2" exitCode=0 Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.108290 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerDied","Data":"90129e008a4dc89b51e60eb13c1d26e28f5c7cdce257c5589da14191ad251cb2"} Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.113995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerStarted","Data":"09eaeff79843cbfc2f9ffb76f9a605c453689a058df45abd066d2424f46b5c4d"} Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.153788 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lcb4v" podStartSLOduration=2.291167096 podStartE2EDuration="10.153771924s" podCreationTimestamp="2026-01-30 13:52:24 +0000 UTC" firstStartedPulling="2026-01-30 13:52:26.057586657 +0000 UTC m=+556.758935148" lastFinishedPulling="2026-01-30 13:52:33.920191485 +0000 UTC m=+564.621539976" observedRunningTime="2026-01-30 13:52:34.148191562 +0000 UTC m=+564.849540063" watchObservedRunningTime="2026-01-30 13:52:34.153771924 +0000 UTC m=+564.855120435" Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.500641 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5rxw" podUID="6be7bc1b-60e4-429d-b706-90063b00442e" containerName="registry-server" probeResult="failure" output=< Jan 30 13:52:34 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:52:34 crc kubenswrapper[4793]: > Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.872972 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.873022 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.126183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerStarted","Data":"4eb4333cd4336b298ad678a984117026d91a1b15197428779efc1835b346a1ef"} Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.148857 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-67xsr" podStartSLOduration=2.467095852 podStartE2EDuration="10.148838262s" podCreationTimestamp="2026-01-30 13:52:25 +0000 UTC" firstStartedPulling="2026-01-30 13:52:27.064969818 +0000 UTC m=+557.766318309" lastFinishedPulling="2026-01-30 13:52:34.746712228 +0000 UTC m=+565.448060719" observedRunningTime="2026-01-30 13:52:35.143793344 +0000 UTC m=+565.845141855" watchObservedRunningTime="2026-01-30 13:52:35.148838262 +0000 UTC m=+565.850186753" Jan 30 13:52:35 crc 
kubenswrapper[4793]: I0130 13:52:35.875872 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.875983 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.917593 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lcb4v" podUID="adcaff8e-ed88-4fa1-af55-aedc60d35481" containerName="registry-server" probeResult="failure" output=< Jan 30 13:52:35 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:52:35 crc kubenswrapper[4793]: > Jan 30 13:52:36 crc kubenswrapper[4793]: I0130 13:52:36.927910 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-67xsr" podUID="4a0cd3b8-afdf-4eb1-b818-565ce4d0647d" containerName="registry-server" probeResult="failure" output=< Jan 30 13:52:36 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:52:36 crc kubenswrapper[4793]: > Jan 30 13:52:42 crc kubenswrapper[4793]: I0130 13:52:42.413496 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:52:42 crc kubenswrapper[4793]: I0130 13:52:42.414075 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:52:43 crc kubenswrapper[4793]: I0130 13:52:43.503561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:43 crc kubenswrapper[4793]: I0130 13:52:43.555415 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:44 crc kubenswrapper[4793]: I0130 13:52:44.915992 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:44 crc kubenswrapper[4793]: I0130 13:52:44.960367 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:45 crc kubenswrapper[4793]: I0130 13:52:45.913906 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:45 crc kubenswrapper[4793]: I0130 13:52:45.950773 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:53:12 crc kubenswrapper[4793]: I0130 13:53:12.414037 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:53:12 crc kubenswrapper[4793]: I0130 13:53:12.414612 4793 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.429633 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.430434 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.445240 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.446183 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.446387 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a" gracePeriod=600 Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490167 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a" exitCode=0 Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"} Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa"} Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490597 4793 scope.go:117] "RemoveContainer" containerID="eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.437727 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jbshc"] Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.439266 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.463351 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jbshc"] Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.609836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-registry-certificates\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610079 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-registry-tls\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-bound-sa-token\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5rt\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-kube-api-access-vp5rt\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610327 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a004a105-a29f-46a5-958e-6cf954856c97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610395 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-trusted-ca\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/a004a105-a29f-46a5-958e-6cf954856c97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.632143 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a004a105-a29f-46a5-958e-6cf954856c97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711641 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-trusted-ca\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a004a105-a29f-46a5-958e-6cf954856c97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711713 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-registry-certificates\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711748 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-bound-sa-token\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-registry-tls\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5rt\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-kube-api-access-vp5rt\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.712107 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a004a105-a29f-46a5-958e-6cf954856c97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.712951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-trusted-ca\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.713368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-registry-certificates\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.717580 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-registry-tls\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.719247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a004a105-a29f-46a5-958e-6cf954856c97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.733728 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5rt\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-kube-api-access-vp5rt\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.735709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-bound-sa-token\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.810158 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc"
Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.973539 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jbshc"]
Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.990553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" event={"ID":"a004a105-a29f-46a5-958e-6cf954856c97","Type":"ContainerStarted","Data":"7d3b158012fa8515ed07746109da8437d41fd316e57ace5b89c602b689f31ffa"}
Jan 30 13:55:16 crc kubenswrapper[4793]: I0130 13:55:16.996308 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" event={"ID":"a004a105-a29f-46a5-958e-6cf954856c97","Type":"ContainerStarted","Data":"0ec34bbd1fa059ca7e9d8a36a858bc7600a2a06d56ef4741c5ab335490255299"}
Jan 30 13:55:16 crc kubenswrapper[4793]: I0130 13:55:16.996601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc"
Jan 30 13:55:17 crc kubenswrapper[4793]: I0130 13:55:17.013445 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" podStartSLOduration=2.01341846 podStartE2EDuration="2.01341846s" podCreationTimestamp="2026-01-30 13:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:55:17.012164589 +0000 UTC m=+727.713513120" watchObservedRunningTime="2026-01-30 13:55:17.01341846 +0000 UTC m=+727.714766981"
Jan 30 13:55:35 crc kubenswrapper[4793]: I0130 13:55:35.819250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc"
Jan 30 13:55:35 crc kubenswrapper[4793]: I0130 13:55:35.913323 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"]
Jan 30 13:56:00 crc kubenswrapper[4793]: I0130 13:56:00.987129 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" containerID="cri-o://2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" gracePeriod=30
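The DELETE above becomes a graceful stop: gracePeriod=30 means the runtime sends SIGTERM first and escalates to SIGKILL only if the registry container outlives the window. A toy Go sketch of that contract (process-level and illustrative; CRI-O implements this on the runtime side, and the 30-second value is the only detail taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace: SIGTERM, wait up to the grace period, then SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop first
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		// The exitCode=0 path the log records for this container.
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // hard kill once the grace period lapses
		<-done
		fmt.Println("hard-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 30*time.Second)
}

Jan 30 13:56:01 crc kubenswrapper[4793]: I0130 13:56:01.980904 4793 util.go:48] "No ready sandbox for pod can be found. 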
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107577 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107635 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107659 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107815 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.108950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.113439 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.113524 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.114630 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.114857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.118635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.120559 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5" (OuterVolumeSpecName: "kube-api-access-xg2l5") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "kube-api-access-xg2l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.124734 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209746 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209800 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209813 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209827 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209842 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209856 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209867 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerDied","Data":"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"} Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282103 4793 scope.go:117] "RemoveContainer" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282041 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.281977 4793 generic.go:334] "Generic (PLEG): container finished" podID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" exitCode=0
Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerDied","Data":"a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471"}
Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.303038 4793 scope.go:117] "RemoveContainer" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"
Jan 30 13:56:02 crc kubenswrapper[4793]: E0130 13:56:02.303604 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3\": container with ID starting with 2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3 not found: ID does not exist" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"
Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.303640 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"} err="failed to get container status \"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3\": rpc error: code = NotFound desc = could not find container \"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3\": container with ID starting with 2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3 not found: ID does not exist"
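The NotFound pair above is a benign race: the container was already removed when the deletor asked the runtime for its status. A sketch of how a CRI client can classify such errors by gRPC status code (a hypothetical helper, not kubelet source; the error text is quoted, truncated, from the log):

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isBenignNotFound reports whether the runtime said the container is already
// gone, which is safe to ignore during deletion.
func isBenignNotFound(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, `could not find container "2000288c..."`)
	fmt.Println(isBenignNotFound(err))                    // true: already deleted
	fmt.Println(isBenignNotFound(errors.New("rpc down"))) // false: a real failure
}

Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.315503 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"]
Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.323830 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"]
Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.415380 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" path="/var/lib/kubelet/pods/d6e18cea-cac6-4eb8-b8de-2885fcf57497/volumes"
Jan 30 13:56:10 crc kubenswrapper[4793]: I0130 13:56:10.909579 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 13:56:12 crc kubenswrapper[4793]: I0130 13:56:12.413469 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:56:12 crc kubenswrapper[4793]: I0130 13:56:12.413764 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:56:42 crc kubenswrapper[4793]: I0130 13:56:42.413975 4793 patch_prober.go:28] 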
interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:56:42 crc kubenswrapper[4793]: I0130 13:56:42.414854 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.414214 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.414942 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.415020 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.416000 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.416127 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa" gracePeriod=600
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.663774 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa" exitCode=0
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.663814 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa"}
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.663864 4793 scope.go:117] "RemoveContainer" containerID="da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"
Jan 30 13:57:13 crc kubenswrapper[4793]: I0130 13:57:13.671923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad"}
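Every record in this file shares the klog header visible above: severity letter plus mmdd, time, PID, source file:line, then the message. A small Go sketch for splitting that header when grepping these logs programmatically; the regex is an assumption fitted to the lines here, not an official parser:

package main

import (
	"fmt"
	"regexp"
)

// Matches e.g.: I0130 13:57:12.416127 4793 kuberuntime_container.go:808] "Killing container ..."
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([\w.]+:\d+)\]\s?(.*)$`)

func main() {
	line := `I0130 13:57:12.416127 4793 kuberuntime_container.go:808] "Killing container with a grace period" gracePeriod=600`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s pid=%s source=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}

Jan 30 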
13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.664917 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq"] Jan 30 13:58:47 crc kubenswrapper[4793]: E0130 13:58:47.665709 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.665726 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.665855 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.666349 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.669767 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.674445 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fpdzl" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.674492 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.674956 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.682320 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-26t5l"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.683121 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.690541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zbvxs" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.705367 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-26t5l"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.713007 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lm7l8"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.713802 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.718578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gjfks" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.735955 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lm7l8"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.858813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w9r\" (UniqueName: \"kubernetes.io/projected/1b680507-f432-4019-b372-d9452d89aa97-kube-api-access-n7w9r\") pod \"cert-manager-858654f9db-26t5l\" (UID: \"1b680507-f432-4019-b372-d9452d89aa97\") " pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.858874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5z7\" (UniqueName: \"kubernetes.io/projected/8fd78cec-1c0f-427e-8224-4021da0ede3c-kube-api-access-td5z7\") pod \"cert-manager-cainjector-cf98fcc89-tzjhq\" (UID: \"8fd78cec-1c0f-427e-8224-4021da0ede3c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.858987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkk56\" (UniqueName: \"kubernetes.io/projected/e88efb4a-1489-4847-adb4-230a8b5db6ef-kube-api-access-mkk56\") pod \"cert-manager-webhook-687f57d79b-lm7l8\" (UID: \"e88efb4a-1489-4847-adb4-230a8b5db6ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.960212 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w9r\" (UniqueName: \"kubernetes.io/projected/1b680507-f432-4019-b372-d9452d89aa97-kube-api-access-n7w9r\") pod \"cert-manager-858654f9db-26t5l\" (UID: \"1b680507-f432-4019-b372-d9452d89aa97\") " pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.960286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td5z7\" (UniqueName: \"kubernetes.io/projected/8fd78cec-1c0f-427e-8224-4021da0ede3c-kube-api-access-td5z7\") pod \"cert-manager-cainjector-cf98fcc89-tzjhq\" (UID: \"8fd78cec-1c0f-427e-8224-4021da0ede3c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.960341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkk56\" (UniqueName: \"kubernetes.io/projected/e88efb4a-1489-4847-adb4-230a8b5db6ef-kube-api-access-mkk56\") pod \"cert-manager-webhook-687f57d79b-lm7l8\" (UID: \"e88efb4a-1489-4847-adb4-230a8b5db6ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.987998 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w9r\" (UniqueName: \"kubernetes.io/projected/1b680507-f432-4019-b372-d9452d89aa97-kube-api-access-n7w9r\") pod \"cert-manager-858654f9db-26t5l\" (UID: \"1b680507-f432-4019-b372-d9452d89aa97\") " pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.988473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-td5z7\" (UniqueName: \"kubernetes.io/projected/8fd78cec-1c0f-427e-8224-4021da0ede3c-kube-api-access-td5z7\") pod \"cert-manager-cainjector-cf98fcc89-tzjhq\" (UID: \"8fd78cec-1c0f-427e-8224-4021da0ede3c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.995498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.995510 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkk56\" (UniqueName: \"kubernetes.io/projected/e88efb4a-1489-4847-adb4-230a8b5db6ef-kube-api-access-mkk56\") pod \"cert-manager-webhook-687f57d79b-lm7l8\" (UID: \"e88efb4a-1489-4847-adb4-230a8b5db6ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.027221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.261063 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-26t5l"] Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.272418 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.284963 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.328078 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lm7l8"] Jan 30 13:58:48 crc kubenswrapper[4793]: W0130 13:58:48.332776 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode88efb4a_1489_4847_adb4_230a8b5db6ef.slice/crio-84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce WatchSource:0}: Error finding container 84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce: Status 404 returned error can't find the container with id 84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.500722 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq"] Jan 30 13:58:48 crc kubenswrapper[4793]: W0130 13:58:48.506770 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd78cec_1c0f_427e_8224_4021da0ede3c.slice/crio-6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073 WatchSource:0}: Error finding container 6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073: Status 404 returned error can't find the container with id 6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073 Jan 30 13:58:49 crc kubenswrapper[4793]: I0130 13:58:49.195266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" event={"ID":"e88efb4a-1489-4847-adb4-230a8b5db6ef","Type":"ContainerStarted","Data":"84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce"} Jan 30 13:58:49 crc kubenswrapper[4793]: I0130 13:58:49.196297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-26t5l" 
event={"ID":"1b680507-f432-4019-b372-d9452d89aa97","Type":"ContainerStarted","Data":"aaa9e1d83c48611449eb72b512d4f2064d9ba3b681f58004fac199eadcf79da5"} Jan 30 13:58:49 crc kubenswrapper[4793]: I0130 13:58:49.197172 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" event={"ID":"8fd78cec-1c0f-427e-8224-4021da0ede3c","Type":"ContainerStarted","Data":"6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.238277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-26t5l" event={"ID":"1b680507-f432-4019-b372-d9452d89aa97","Type":"ContainerStarted","Data":"511706c2bbf825dd020c10e34d24be89772a8fc4cfdd2fe7554e1064cb56e985"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.240574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" event={"ID":"8fd78cec-1c0f-427e-8224-4021da0ede3c","Type":"ContainerStarted","Data":"51acd0d3e2d331a29cc7f93cde35c33ee2f096c038936babc4e402b2afe7ac70"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.244406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" event={"ID":"e88efb4a-1489-4847-adb4-230a8b5db6ef","Type":"ContainerStarted","Data":"397ae737299c48e7407c819cec40d16557ad4ced52e09be6fb4b85c45b12a802"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.245082 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.251667 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-26t5l" podStartSLOduration=1.895034172 podStartE2EDuration="7.251647163s" podCreationTimestamp="2026-01-30 13:58:47 +0000 UTC" firstStartedPulling="2026-01-30 13:58:48.272175302 +0000 UTC m=+938.973523793" lastFinishedPulling="2026-01-30 13:58:53.628788293 +0000 UTC m=+944.330136784" observedRunningTime="2026-01-30 13:58:54.24990043 +0000 UTC m=+944.951248931" watchObservedRunningTime="2026-01-30 13:58:54.251647163 +0000 UTC m=+944.952995654" Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.268711 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" podStartSLOduration=2.005806527 podStartE2EDuration="7.26868555s" podCreationTimestamp="2026-01-30 13:58:47 +0000 UTC" firstStartedPulling="2026-01-30 13:58:48.336431027 +0000 UTC m=+939.037779518" lastFinishedPulling="2026-01-30 13:58:53.59931005 +0000 UTC m=+944.300658541" observedRunningTime="2026-01-30 13:58:54.267508291 +0000 UTC m=+944.968856782" watchObservedRunningTime="2026-01-30 13:58:54.26868555 +0000 UTC m=+944.970034041" Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.298627 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" podStartSLOduration=2.193016778 podStartE2EDuration="7.298607714s" podCreationTimestamp="2026-01-30 13:58:47 +0000 UTC" firstStartedPulling="2026-01-30 13:58:48.509031949 +0000 UTC m=+939.210380430" lastFinishedPulling="2026-01-30 13:58:53.614622875 +0000 UTC m=+944.315971366" observedRunningTime="2026-01-30 13:58:54.297530068 +0000 UTC m=+944.998878569" watchObservedRunningTime="2026-01-30 13:58:54.298607714 +0000 UTC m=+944.999956215" Jan 30 13:58:57 
crc kubenswrapper[4793]: I0130 13:58:57.253106 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g62p5"] Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.254421 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" containerID="cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255102 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" containerID="cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255206 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" containerID="cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255276 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" containerID="cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255309 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" containerID="cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255427 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" containerID="cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255441 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.331571 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" containerID="cri-o://970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" gracePeriod=30 Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.608352 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.610856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-acl-logging/0.log" Jan 
30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.611469 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-controller/0.log" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.612015 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673318 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2kfl2"] Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673579 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kubecfg-setup" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673601 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kubecfg-setup" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673615 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673630 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673638 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673650 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673657 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673666 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673673 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673683 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673691 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673699 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673708 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673721 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" Jan 30 13:58:57 crc 
kubenswrapper[4793]: I0130 13:58:57.673728 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673740 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673747 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673759 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673766 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673778 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673785 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673799 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673806 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673919 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673929 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673941 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673958 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673969 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673980 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673993 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674001 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674014 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.674297 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674308 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674395 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674589 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.675868 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.703611 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.703857 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.703978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704540 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704648 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704876 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705057 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705354 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705586 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash" (OuterVolumeSpecName: "host-slash") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705749 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705884 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706000 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706126 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706244 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706909 4793 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707760 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706069 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706091 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706513 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706969 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706996 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707032 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707063 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707095 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log" (OuterVolumeSpecName: "node-log") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707263 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket" (OuterVolumeSpecName: "log-socket") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707282 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.711870 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.711970 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w" (OuterVolumeSpecName: "kube-api-access-8km7w") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "kube-api-access-8km7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.719240 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.808968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-netd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-node-log\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-etc-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-systemd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-script-lib\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809220 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-log-socket\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7796f\" (UniqueName: 
\"kubernetes.io/projected/342be2df-69a2-48ac-bad1-4445129ba471-kube-api-access-7796f\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-kubelet\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-bin\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809368 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-netns\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-config\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809409 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-var-lib-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809443 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-ovn\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809525 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-systemd-units\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809597 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-env-overrides\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809623 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-slash\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/342be2df-69a2-48ac-bad1-4445129ba471-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809671 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809740 4793 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809752 4793 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809762 4793 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809773 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809784 4793 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809794 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809803 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809813 4793 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809822 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809832 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809842 4793 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809853 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809863 4793 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809871 4793 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809881 4793 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809891 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809902 4793 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809911 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-config\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911435 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-var-lib-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911454 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-ovn\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-systemd-units\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-env-overrides\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-slash\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/342be2df-69a2-48ac-bad1-4445129ba471-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911563 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-node-log\") pod 
\"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-netd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-etc-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911636 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-systemd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911654 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-script-lib\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-log-socket\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7796f\" (UniqueName: \"kubernetes.io/projected/342be2df-69a2-48ac-bad1-4445129ba471-kube-api-access-7796f\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911694 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-kubelet\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911728 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-bin\") pod \"ovnkube-node-2kfl2\" (UID: 
\"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911749 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-netns\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-netns\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-node-log\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-netd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-kubelet\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-log-socket\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-systemd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-etc-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912656 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-config\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912701 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912733 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-bin\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912771 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-ovn\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912776 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-env-overrides\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-systemd-units\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-var-lib-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.913062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-slash\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.913199 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-script-lib\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.916507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/342be2df-69a2-48ac-bad1-4445129ba471-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.927001 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7796f\" (UniqueName: \"kubernetes.io/projected/342be2df-69a2-48ac-bad1-4445129ba471-kube-api-access-7796f\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.988831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:58 crc kubenswrapper[4793]: W0130 13:58:58.004791 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod342be2df_69a2_48ac_bad1_4445129ba471.slice/crio-387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7 WatchSource:0}: Error finding container 387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7: Status 404 returned error can't find the container with id 387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.031990 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.280200 4793 generic.go:334] "Generic (PLEG): container finished" podID="342be2df-69a2-48ac-bad1-4445129ba471" containerID="88b8e73ada383f6ab1bbf6341550ed0c3856aadbb0adf3493033cfe1f554513d" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.280298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerDied","Data":"88b8e73ada383f6ab1bbf6341550ed0c3856aadbb0adf3493033cfe1f554513d"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.280334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.283030 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.285740 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-acl-logging/0.log" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-controller/0.log" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 
13:58:58.286711 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286747 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286756 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286767 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286775 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286783 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286791 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" exitCode=143 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286800 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" exitCode=143 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286921 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" 
event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286947 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286960 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286967 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286974 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286980 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286986 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286992 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286998 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287005 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287013 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287025 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287032 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:58:58 crc 
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287076 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287083 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287091 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287097 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287105 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287112 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287119 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287142 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287150 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287157 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287164 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287171 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287177 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287183 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287189 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287198 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287204 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287214 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"483688d83c9fd52a9c7106da5a4bf9f5c29a0ecb4d0a52164165da4e2be17cc3"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287224 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287233 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287241 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287247 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287254 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287260 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287266 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287272 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287278 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287285 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287164 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287150 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.293976 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/2.log"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294452 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294504 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" containerID="bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd" exitCode=2
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerDied","Data":"bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294577 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294984 4793 scope.go:117] "RemoveContainer" containerID="bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.340763 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.372564 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g62p5"]
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.376238 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.396078 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g62p5"]
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.406583 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" path="/var/lib/kubelet/pods/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/volumes"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.407231 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.438032 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.455516 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.478979 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.504422 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.531633 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.549573 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.562323 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.562614 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.562649 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.562671 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563007 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563137 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563159 4793 scope.go:117] "RemoveContainer" 
containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563390 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563419 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563439 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563604 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563632 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563649 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563882 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563925 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563925 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563944 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.564302 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564324 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564361 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.564552 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564574 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564590 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.564769 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564790 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564807 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.565016 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565055 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565073 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.565426 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565484 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565502 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565738 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565762 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566093 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566119 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566459 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566500 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566811 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567138 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567159 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567493 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567535 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567801 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567825 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568143 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568166 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568493 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568517 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568761 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist"
containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569285 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569326 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569673 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569731 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570072 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570094 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570659 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570692 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571031 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find 
container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571081 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571451 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571478 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572333 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572378 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572604 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572632 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572995 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573021 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573332 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573364 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573635 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573681 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573995 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574018 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574300 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574323 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574668 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 
34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574692 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574926 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574946 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575193 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575222 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575512 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575537 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575830 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575847 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576106 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576137 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576502 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576527 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576859 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"5d3ec634afc2a467df35090317e765b9461be46d730b25ac7a328d44f8900b8c"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301869 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"3bf7a64a244e9e2cdf6016ed5599bb41e04a892904664e45a2d378e93dc7f6ff"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301881 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"b6c86fa688a85fe9bd9556d6a64bc540b9e93f0598b1b67f8c975082772a5d3f"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301889 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"8510d09077213d1f2a66660ce2daa05063f18c25e78618889de101f314313091"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" 
event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"1619c1af114c7b01dbbb7c9436c129d7eedfc249de446b630c05cd560373ae40"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"26718b1a742de84bb59e907064af9f6254b9b92d21fc639e1b0a80d157b3edfe"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.305224 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/2.log" Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.305700 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log" Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.305743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"27bd2894001dfffb134c2b97e60040970b8d244763407764387fc4dc4ce9b94e"} Jan 30 13:59:01 crc kubenswrapper[4793]: I0130 13:59:01.320120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"494ffb56d5753c45465eff5c0a4d4afad318fe1bd2db9b535c17b111d4564272"} Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.340688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"34e99456281896b91f5998355a384b746fd5232666549694dca3c3e1848c2b28"} Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.341267 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.341365 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.341440 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.377852 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.406108 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.420444 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" podStartSLOduration=7.420427886 podStartE2EDuration="7.420427886s" podCreationTimestamp="2026-01-30 13:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:59:04.386115854 +0000 UTC m=+955.087464365" watchObservedRunningTime="2026-01-30 13:59:04.420427886 +0000 UTC m=+955.121776367" Jan 30 13:59:12 crc kubenswrapper[4793]: I0130 13:59:12.413883 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:59:12 crc kubenswrapper[4793]: I0130 13:59:12.415503 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:59:13 crc kubenswrapper[4793]: I0130 13:59:13.807872 4793 scope.go:117] "RemoveContainer" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" Jan 30 13:59:14 crc kubenswrapper[4793]: I0130 13:59:14.414603 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/2.log" Jan 30 13:59:28 crc kubenswrapper[4793]: I0130 13:59:28.011943 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.136674 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4"] Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.138140 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.139692 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.147626 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4"] Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.186001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.186205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.186250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.287846 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.287914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.287963 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.288574 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.288764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.307126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.459416 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.676581 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4"] Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.971498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerStarted","Data":"8e24f9b9bcb471ebd0938aeaf2a15d649b9ee08f57f2f1fa3db1889d608b6208"} Jan 30 13:59:41 crc kubenswrapper[4793]: I0130 13:59:41.981371 4793 generic.go:334] "Generic (PLEG): container finished" podID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerID="1e233a4b22b25c43b3c6a8e65ce89f7e9846533b834f32e059dfe4cdb44551b5" exitCode=0 Jan 30 13:59:41 crc kubenswrapper[4793]: I0130 13:59:41.981463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"1e233a4b22b25c43b3c6a8e65ce89f7e9846533b834f32e059dfe4cdb44551b5"} Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.369861 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.371075 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.387497 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.419221 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.419281 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.429260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.429566 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.429804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531642 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.550376 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.690269 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:43 crc kubenswrapper[4793]: I0130 13:59:43.096908 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.007651 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerStarted","Data":"1e7de3555e80880d54038395ae121bedcc6c5978b8ce7b6a1757b99f65006ac4"} Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.010485 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" exitCode=0 Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.010566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30"} Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.010616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerStarted","Data":"a7f6cd11bf61597471d4b3cc7d761e75ee9fbc7009499720876fb6770586f0a7"} Jan 30 13:59:45 crc kubenswrapper[4793]: I0130 13:59:45.020432 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerStarted","Data":"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514"} Jan 30 13:59:45 crc kubenswrapper[4793]: I0130 13:59:45.023756 4793 generic.go:334] "Generic (PLEG): container finished" podID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerID="1e7de3555e80880d54038395ae121bedcc6c5978b8ce7b6a1757b99f65006ac4" exitCode=0 Jan 30 13:59:45 crc kubenswrapper[4793]: I0130 13:59:45.023824 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"1e7de3555e80880d54038395ae121bedcc6c5978b8ce7b6a1757b99f65006ac4"} Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.030826 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" exitCode=0 Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.031519 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514"} Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.036306 4793 generic.go:334] "Generic (PLEG): container finished" podID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerID="bbb74e9c49a1cef752d2f80736e9c9e81375ecf59d8924bcc95c24115e7559d7" exitCode=0 Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.036333 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" 
event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"bbb74e9c49a1cef752d2f80736e9c9e81375ecf59d8924bcc95c24115e7559d7"} Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.308517 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.398031 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.398095 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.398785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle" (OuterVolumeSpecName: "bundle") pod "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" (UID: "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.403111 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk" (OuterVolumeSpecName: "kube-api-access-whnkk") pod "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" (UID: "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120"). InnerVolumeSpecName "kube-api-access-whnkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.498914 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.499654 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") on node \"crc\" DevicePath \"\"" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.499687 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.509664 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util" (OuterVolumeSpecName: "util") pod "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" (UID: "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.601688 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.060631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerStarted","Data":"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31"} Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.063230 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"8e24f9b9bcb471ebd0938aeaf2a15d649b9ee08f57f2f1fa3db1889d608b6208"} Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.063293 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e24f9b9bcb471ebd0938aeaf2a15d649b9ee08f57f2f1fa3db1889d608b6208" Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.063301 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.079692 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9xlp" podStartSLOduration=2.566484571 podStartE2EDuration="6.079676199s" podCreationTimestamp="2026-01-30 13:59:42 +0000 UTC" firstStartedPulling="2026-01-30 13:59:44.012817947 +0000 UTC m=+994.714166468" lastFinishedPulling="2026-01-30 13:59:47.526009595 +0000 UTC m=+998.227358096" observedRunningTime="2026-01-30 13:59:48.076953442 +0000 UTC m=+998.778301953" watchObservedRunningTime="2026-01-30 13:59:48.079676199 +0000 UTC m=+998.781024700" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601234 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9bsps"] Jan 30 13:59:51 crc kubenswrapper[4793]: E0130 13:59:51.601671 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="extract" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601683 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="extract" Jan 30 13:59:51 crc kubenswrapper[4793]: E0130 13:59:51.601701 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="util" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601707 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="util" Jan 30 13:59:51 crc kubenswrapper[4793]: E0130 13:59:51.601719 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="pull" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601726 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="pull" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601859 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" 
containerName="extract" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.602279 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.605784 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.607693 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.608541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-96p7k" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.622229 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9bsps"] Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.662325 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5jw\" (UniqueName: \"kubernetes.io/projected/1f691ecb-c128-4332-a7ab-c4e173490f50-kube-api-access-fz5jw\") pod \"nmstate-operator-646758c888-9bsps\" (UID: \"1f691ecb-c128-4332-a7ab-c4e173490f50\") " pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.763330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5jw\" (UniqueName: \"kubernetes.io/projected/1f691ecb-c128-4332-a7ab-c4e173490f50-kube-api-access-fz5jw\") pod \"nmstate-operator-646758c888-9bsps\" (UID: \"1f691ecb-c128-4332-a7ab-c4e173490f50\") " pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.782025 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5jw\" (UniqueName: \"kubernetes.io/projected/1f691ecb-c128-4332-a7ab-c4e173490f50-kube-api-access-fz5jw\") pod \"nmstate-operator-646758c888-9bsps\" (UID: \"1f691ecb-c128-4332-a7ab-c4e173490f50\") " pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.916445 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:52 crc kubenswrapper[4793]: I0130 13:59:52.361940 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9bsps"] Jan 30 13:59:52 crc kubenswrapper[4793]: W0130 13:59:52.365236 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f691ecb_c128_4332_a7ab_c4e173490f50.slice/crio-e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42 WatchSource:0}: Error finding container e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42: Status 404 returned error can't find the container with id e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42 Jan 30 13:59:52 crc kubenswrapper[4793]: I0130 13:59:52.690797 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:52 crc kubenswrapper[4793]: I0130 13:59:52.691139 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:53 crc kubenswrapper[4793]: I0130 13:59:53.089035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" event={"ID":"1f691ecb-c128-4332-a7ab-c4e173490f50","Type":"ContainerStarted","Data":"e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42"} Jan 30 13:59:53 crc kubenswrapper[4793]: I0130 13:59:53.728466 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9xlp" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" probeResult="failure" output=< Jan 30 13:59:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:59:53 crc kubenswrapper[4793]: > Jan 30 13:59:55 crc kubenswrapper[4793]: I0130 13:59:55.100673 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" event={"ID":"1f691ecb-c128-4332-a7ab-c4e173490f50","Type":"ContainerStarted","Data":"51b24ad2dfba71f19e3fb756dfd4769fa3df27dbc9f3d17aa8e7d977a5cd78c0"} Jan 30 13:59:55 crc kubenswrapper[4793]: I0130 13:59:55.127120 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" podStartSLOduration=2.000234449 podStartE2EDuration="4.12708629s" podCreationTimestamp="2026-01-30 13:59:51 +0000 UTC" firstStartedPulling="2026-01-30 13:59:52.366444831 +0000 UTC m=+1003.067793322" lastFinishedPulling="2026-01-30 13:59:54.493296672 +0000 UTC m=+1005.194645163" observedRunningTime="2026-01-30 13:59:55.120465548 +0000 UTC m=+1005.821814079" watchObservedRunningTime="2026-01-30 13:59:55.12708629 +0000 UTC m=+1005.828434821" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.166750 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.168494 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.170980 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.171131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.185798 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.370077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.370362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.370527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.471234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.471579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.471753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.472519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod 
\"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.480825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.489585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.541084 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2gwr6"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.542105 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.545956 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gdrsf" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.551252 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.551837 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.555326 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2gwr6"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.561654 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572444 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgstl\" (UniqueName: \"kubernetes.io/projected/1a7bdce5-b625-40ce-b674-a834fcd178a8-kube-api-access-sgstl\") pod \"nmstate-metrics-54757c584b-2gwr6\" (UID: \"1a7bdce5-b625-40ce-b674-a834fcd178a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgff\" (UniqueName: \"kubernetes.io/projected/68bcadc4-02c3-44c0-a252-0606ff1f0a09-kube-api-access-vpgff\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572597 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dh9db"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.573321 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.595989 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.673640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgstl\" (UniqueName: \"kubernetes.io/projected/1a7bdce5-b625-40ce-b674-a834fcd178a8-kube-api-access-sgstl\") pod \"nmstate-metrics-54757c584b-2gwr6\" (UID: \"1a7bdce5-b625-40ce-b674-a834fcd178a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.673889 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgff\" (UniqueName: \"kubernetes.io/projected/68bcadc4-02c3-44c0-a252-0606ff1f0a09-kube-api-access-vpgff\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.674023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: E0130 14:00:00.674212 4793 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 14:00:00 crc kubenswrapper[4793]: E0130 14:00:00.674329 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair podName:68bcadc4-02c3-44c0-a252-0606ff1f0a09 nodeName:}" failed. No retries permitted until 2026-01-30 14:00:01.174312836 +0000 UTC m=+1011.875661327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-hw489" (UID: "68bcadc4-02c3-44c0-a252-0606ff1f0a09") : secret "openshift-nmstate-webhook" not found Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.695931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgff\" (UniqueName: \"kubernetes.io/projected/68bcadc4-02c3-44c0-a252-0606ff1f0a09-kube-api-access-vpgff\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.706821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgstl\" (UniqueName: \"kubernetes.io/projected/1a7bdce5-b625-40ce-b674-a834fcd178a8-kube-api-access-sgstl\") pod \"nmstate-metrics-54757c584b-2gwr6\" (UID: \"1a7bdce5-b625-40ce-b674-a834fcd178a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.774796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-ovs-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.775214 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-dbus-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.775329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-nmstate-lock\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.775440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj2m\" (UniqueName: \"kubernetes.io/projected/e635e428-77d8-44fb-baa4-1af4bd603c10-kube-api-access-dsj2m\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.785437 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.829397 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.830426 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.837708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wh5fk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.837785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.837973 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.850959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.865250 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.877915 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-ovs-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-dbus-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878102 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-nmstate-lock\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878147 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsj2m\" (UniqueName: \"kubernetes.io/projected/e635e428-77d8-44fb-baa4-1af4bd603c10-kube-api-access-dsj2m\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878544 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-dbus-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-nmstate-lock\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-ovs-socket\") pod \"nmstate-handler-dh9db\" (UID: 
\"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.913433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsj2m\" (UniqueName: \"kubernetes.io/projected/e635e428-77d8-44fb-baa4-1af4bd603c10-kube-api-access-dsj2m\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.981090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85w8\" (UniqueName: \"kubernetes.io/projected/5df01042-63fe-458a-b71d-d1f9bdf9ea66-kube-api-access-d85w8\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.981175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df01042-63fe-458a-b71d-d1f9bdf9ea66-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.981213 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df01042-63fe-458a-b71d-d1f9bdf9ea66-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.086207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d85w8\" (UniqueName: \"kubernetes.io/projected/5df01042-63fe-458a-b71d-d1f9bdf9ea66-kube-api-access-d85w8\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.086738 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df01042-63fe-458a-b71d-d1f9bdf9ea66-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.086776 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df01042-63fe-458a-b71d-d1f9bdf9ea66-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.089658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df01042-63fe-458a-b71d-d1f9bdf9ea66-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.111350 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df01042-63fe-458a-b71d-d1f9bdf9ea66-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.132252 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5767d7b4df-v5z9l"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.134197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d85w8\" (UniqueName: \"kubernetes.io/projected/5df01042-63fe-458a-b71d-d1f9bdf9ea66-kube-api-access-d85w8\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.138241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.152953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5767d7b4df-v5z9l"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.158635 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.188460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.192791 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.193338 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.220584 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode635e428_77d8_44fb_baa4_1af4bd603c10.slice/crio-c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708 WatchSource:0}: Error finding container c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708: Status 404 returned error can't find the container with id c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708 Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.246359 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.290815 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-trusted-ca-bundle\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.290909 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.290977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lz5g\" (UniqueName: \"kubernetes.io/projected/369f339c-5894-4bda-8e5a-aa9ef1a8456c-kube-api-access-8lz5g\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291018 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-service-ca\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291088 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291117 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-oauth-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-oauth-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lz5g\" (UniqueName: \"kubernetes.io/projected/369f339c-5894-4bda-8e5a-aa9ef1a8456c-kube-api-access-8lz5g\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-service-ca\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-oauth-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-oauth-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-trusted-ca-bundle\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.396367 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-trusted-ca-bundle\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.396558 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.396764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-service-ca\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.397279 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-oauth-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.398955 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-oauth-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.401235 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.418232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lz5g\" (UniqueName: \"kubernetes.io/projected/369f339c-5894-4bda-8e5a-aa9ef1a8456c-kube-api-access-8lz5g\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.464786 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.471649 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.474589 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.479360 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5df01042_63fe_458a_b71d_d1f9bdf9ea66.slice/crio-4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3 WatchSource:0}: Error finding container 4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3: Status 404 returned error can't find the container with id 4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3 Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.520724 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2gwr6"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.795717 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5767d7b4df-v5z9l"] Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.801848 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369f339c_5894_4bda_8e5a_aa9ef1a8456c.slice/crio-5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f WatchSource:0}: Error finding container 5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f: Status 404 returned error can't find the container with id 5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.850935 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489"] Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.873461 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68bcadc4_02c3_44c0_a252_0606ff1f0a09.slice/crio-5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d WatchSource:0}: Error finding container 5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d: Status 404 returned error can't find the container with id 5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.148262 4793 generic.go:334] "Generic (PLEG): container finished" podID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerID="21efee8d4521693281692f27a68228834ba45b6ab82173ff835a52b2e30855b1" exitCode=0 Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.148344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" event={"ID":"0262a970-62b2-47c1-93bf-1e4455a999bf","Type":"ContainerDied","Data":"21efee8d4521693281692f27a68228834ba45b6ab82173ff835a52b2e30855b1"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.148409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" event={"ID":"0262a970-62b2-47c1-93bf-1e4455a999bf","Type":"ContainerStarted","Data":"64c0c3a6986cd308648b3ad53f5fdb56a5e0c9ad5021668cc815471ffff6de56"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.149513 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dh9db" event={"ID":"e635e428-77d8-44fb-baa4-1af4bd603c10","Type":"ContainerStarted","Data":"c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.152952 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" event={"ID":"68bcadc4-02c3-44c0-a252-0606ff1f0a09","Type":"ContainerStarted","Data":"5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.155834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5767d7b4df-v5z9l" event={"ID":"369f339c-5894-4bda-8e5a-aa9ef1a8456c","Type":"ContainerStarted","Data":"c1dd9263c27873f41299a7f96df549b99d19f3103391f2126d720071631ba670"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.155988 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5767d7b4df-v5z9l" event={"ID":"369f339c-5894-4bda-8e5a-aa9ef1a8456c","Type":"ContainerStarted","Data":"5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.158088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" event={"ID":"1a7bdce5-b625-40ce-b674-a834fcd178a8","Type":"ContainerStarted","Data":"da81f03cdba551cc826e13e5619ff1eaca5dc68a3ce7c54b64edcb6017ada240"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.159575 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" event={"ID":"5df01042-63fe-458a-b71d-d1f9bdf9ea66","Type":"ContainerStarted","Data":"4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.194944 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5767d7b4df-v5z9l" podStartSLOduration=1.194927723 podStartE2EDuration="1.194927723s" podCreationTimestamp="2026-01-30 14:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:00:02.194259667 +0000 UTC m=+1012.895608158" watchObservedRunningTime="2026-01-30 14:00:02.194927723 +0000 UTC m=+1012.896276214" Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.742894 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.793747 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.983200 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.400598 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.527666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod \"0262a970-62b2-47c1-93bf-1e4455a999bf\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.527747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"0262a970-62b2-47c1-93bf-1e4455a999bf\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.527858 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"0262a970-62b2-47c1-93bf-1e4455a999bf\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.529835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "0262a970-62b2-47c1-93bf-1e4455a999bf" (UID: "0262a970-62b2-47c1-93bf-1e4455a999bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.534496 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55" (OuterVolumeSpecName: "kube-api-access-t8s55") pod "0262a970-62b2-47c1-93bf-1e4455a999bf" (UID: "0262a970-62b2-47c1-93bf-1e4455a999bf"). InnerVolumeSpecName "kube-api-access-t8s55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.535207 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0262a970-62b2-47c1-93bf-1e4455a999bf" (UID: "0262a970-62b2-47c1-93bf-1e4455a999bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.629611 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.629672 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.630000 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186624 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" event={"ID":"0262a970-62b2-47c1-93bf-1e4455a999bf","Type":"ContainerDied","Data":"64c0c3a6986cd308648b3ad53f5fdb56a5e0c9ad5021668cc815471ffff6de56"} Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186816 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c0c3a6986cd308648b3ad53f5fdb56a5e0c9ad5021668cc815471ffff6de56" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186911 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9xlp" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" containerID="cri-o://344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" gracePeriod=2 Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.575668 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.647294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.652343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.652507 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.653340 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities" (OuterVolumeSpecName: "utilities") pod "8c59ec83-7715-4a59-a31b-b433cc9d77a7" (UID: "8c59ec83-7715-4a59-a31b-b433cc9d77a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.674239 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh" (OuterVolumeSpecName: "kube-api-access-knlxh") pod "8c59ec83-7715-4a59-a31b-b433cc9d77a7" (UID: "8c59ec83-7715-4a59-a31b-b433cc9d77a7"). InnerVolumeSpecName "kube-api-access-knlxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.754433 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.754473 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.783596 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c59ec83-7715-4a59-a31b-b433cc9d77a7" (UID: "8c59ec83-7715-4a59-a31b-b433cc9d77a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.855949 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192833 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" exitCode=0 Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31"} Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192897 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"a7f6cd11bf61597471d4b3cc7d761e75ee9fbc7009499720876fb6770586f0a7"} Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192914 4793 scope.go:117] "RemoveContainer" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192912 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.226998 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.233931 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.377561 4793 scope.go:117] "RemoveContainer" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.403565 4793 scope.go:117] "RemoveContainer" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.456060 4793 scope.go:117] "RemoveContainer" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" Jan 30 14:00:05 crc kubenswrapper[4793]: E0130 14:00:05.456598 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31\": container with ID starting with 344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31 not found: ID does not exist" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.456630 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31"} err="failed to get container status \"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31\": rpc error: code = NotFound desc = could not find container \"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31\": container with ID starting with 344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31 not found: ID does not exist" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.456663 4793 scope.go:117] "RemoveContainer" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" Jan 30 14:00:05 crc kubenswrapper[4793]: E0130 14:00:05.457868 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514\": container with ID starting with aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514 not found: ID does not exist" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.457889 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514"} err="failed to get container status \"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514\": rpc error: code = NotFound desc = could not find container \"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514\": container with ID starting with aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514 not found: ID does not exist" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.457903 4793 scope.go:117] "RemoveContainer" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" Jan 30 14:00:05 crc kubenswrapper[4793]: E0130 14:00:05.458265 4793 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30\": container with ID starting with dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30 not found: ID does not exist" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.458287 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30"} err="failed to get container status \"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30\": rpc error: code = NotFound desc = could not find container \"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30\": container with ID starting with dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30 not found: ID does not exist" Jan 30 14:00:06 crc kubenswrapper[4793]: I0130 14:00:06.198630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" event={"ID":"68bcadc4-02c3-44c0-a252-0606ff1f0a09","Type":"ContainerStarted","Data":"4bf25963d2cd39801b243d4773e8508dcb28686cd0c45d63749828e61735a1c3"} Jan 30 14:00:06 crc kubenswrapper[4793]: I0130 14:00:06.198940 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:06 crc kubenswrapper[4793]: I0130 14:00:06.406339 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" path="/var/lib/kubelet/pods/8c59ec83-7715-4a59-a31b-b433cc9d77a7/volumes" Jan 30 14:00:09 crc kubenswrapper[4793]: I0130 14:00:09.218324 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dh9db" event={"ID":"e635e428-77d8-44fb-baa4-1af4bd603c10","Type":"ContainerStarted","Data":"1377f28a7f0b4a414b4b9738eef54a994c785687bcde1f5466f1e45c6e5cbb3f"} Jan 30 14:00:09 crc kubenswrapper[4793]: I0130 14:00:09.218856 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:09 crc kubenswrapper[4793]: I0130 14:00:09.230800 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" podStartSLOduration=5.642250926 podStartE2EDuration="9.23078338s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.876024085 +0000 UTC m=+1012.577372576" lastFinishedPulling="2026-01-30 14:00:05.464556539 +0000 UTC m=+1016.165905030" observedRunningTime="2026-01-30 14:00:06.220002239 +0000 UTC m=+1016.921350750" watchObservedRunningTime="2026-01-30 14:00:09.23078338 +0000 UTC m=+1019.932131871" Jan 30 14:00:10 crc kubenswrapper[4793]: I0130 14:00:10.418300 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dh9db" podStartSLOduration=3.083622952 podStartE2EDuration="10.418279082s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.222658239 +0000 UTC m=+1011.924006730" lastFinishedPulling="2026-01-30 14:00:08.557314339 +0000 UTC m=+1019.258662860" observedRunningTime="2026-01-30 14:00:09.231664742 +0000 UTC m=+1019.933013253" watchObservedRunningTime="2026-01-30 14:00:10.418279082 +0000 UTC m=+1021.119627573" Jan 30 14:00:11 crc kubenswrapper[4793]: I0130 14:00:11.465425 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:11 crc kubenswrapper[4793]: I0130 14:00:11.465775 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:11 crc kubenswrapper[4793]: I0130 14:00:11.470156 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.241277 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.307171 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.413501 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.413563 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.413608 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.414208 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.414274 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad" gracePeriod=600 Jan 30 14:00:13 crc kubenswrapper[4793]: I0130 14:00:13.245365 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad" exitCode=0 Jan 30 14:00:13 crc kubenswrapper[4793]: I0130 14:00:13.245405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad"} Jan 30 14:00:13 crc kubenswrapper[4793]: I0130 14:00:13.245863 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28"} Jan 30 14:00:13 crc 
kubenswrapper[4793]: I0130 14:00:13.245888 4793 scope.go:117] "RemoveContainer" containerID="b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa" Jan 30 14:00:15 crc kubenswrapper[4793]: I0130 14:00:15.265321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" event={"ID":"1a7bdce5-b625-40ce-b674-a834fcd178a8","Type":"ContainerStarted","Data":"e30e718785f12382656876fa7585be638cfe0dd79889855f5a994ced8033d38d"} Jan 30 14:00:16 crc kubenswrapper[4793]: I0130 14:00:16.218186 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:19 crc kubenswrapper[4793]: I0130 14:00:19.452502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" event={"ID":"1a7bdce5-b625-40ce-b674-a834fcd178a8","Type":"ContainerStarted","Data":"058b6d62cbb40fce810098a2d0261de1aba5023da85e8fa2a79824ddb5096f7f"} Jan 30 14:00:19 crc kubenswrapper[4793]: I0130 14:00:19.454469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" event={"ID":"5df01042-63fe-458a-b71d-d1f9bdf9ea66","Type":"ContainerStarted","Data":"82d3f200b8bf09e3e0c6fa5be1702a767313348d3da5aac8f66bcd610f5a6bfa"} Jan 30 14:00:20 crc kubenswrapper[4793]: I0130 14:00:20.474614 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" podStartSLOduration=2.854421801 podStartE2EDuration="20.474598447s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.491445537 +0000 UTC m=+1012.192794028" lastFinishedPulling="2026-01-30 14:00:19.111622173 +0000 UTC m=+1029.812970674" observedRunningTime="2026-01-30 14:00:20.472384223 +0000 UTC m=+1031.173732734" watchObservedRunningTime="2026-01-30 14:00:20.474598447 +0000 UTC m=+1031.175946938" Jan 30 14:00:20 crc kubenswrapper[4793]: I0130 14:00:20.504334 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" podStartSLOduration=2.910273061 podStartE2EDuration="20.504315696s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.534665217 +0000 UTC m=+1012.236013708" lastFinishedPulling="2026-01-30 14:00:19.128707842 +0000 UTC m=+1029.830056343" observedRunningTime="2026-01-30 14:00:20.503638739 +0000 UTC m=+1031.204987260" watchObservedRunningTime="2026-01-30 14:00:20.504315696 +0000 UTC m=+1031.205664187" Jan 30 14:00:21 crc kubenswrapper[4793]: I0130 14:00:21.480008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.573372 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574306 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="extract-utilities" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574328 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="extract-utilities" Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574346 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" 
containerName="extract-content" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574358 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="extract-content" Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574378 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerName="collect-profiles" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574388 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerName="collect-profiles" Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574411 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574575 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerName="collect-profiles" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574601 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.575838 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.591704 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.617950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.618095 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.618126 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719486 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.720092 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.741759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.905691 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.380278 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:28 crc kubenswrapper[4793]: W0130 14:00:28.396749 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f70350_2f2a_41aa_900d_d42d13231186.slice/crio-07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462 WatchSource:0}: Error finding container 07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462: Status 404 returned error can't find the container with id 07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462 Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.523155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerStarted","Data":"07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462"} Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.747231 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.748254 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.776365 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.863243 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.863308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.863376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967573 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.968173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:28.991612 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.105506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.541437 4793 generic.go:334] "Generic (PLEG): container finished" podID="94f70350-2f2a-41aa-900d-d42d13231186" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" exitCode=0 Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.542035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22"} Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.691321 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.550240 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerStarted","Data":"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03"} Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.553888 4793 generic.go:334] "Generic (PLEG): container finished" podID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" exitCode=0 Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.553931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed"} Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.553976 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerStarted","Data":"398390322b79ae3539c03801cd1c80713e78c256487b16c885394a72c17c0058"} Jan 30 14:00:31 crc kubenswrapper[4793]: I0130 14:00:31.563334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerStarted","Data":"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a"} Jan 30 14:00:31 crc kubenswrapper[4793]: I0130 14:00:31.566262 4793 generic.go:334] "Generic (PLEG): container finished" podID="94f70350-2f2a-41aa-900d-d42d13231186" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" exitCode=0 Jan 30 14:00:31 crc kubenswrapper[4793]: I0130 14:00:31.566288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03"} Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.588618 4793 generic.go:334] "Generic (PLEG): container finished" podID="31ef0a7f-aa60-4b86-b113-da5bc0614016" 
containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" exitCode=0 Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.588716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a"} Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.599713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerStarted","Data":"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f"} Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.638969 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5rsz" podStartSLOduration=3.108877059 podStartE2EDuration="5.638950609s" podCreationTimestamp="2026-01-30 14:00:27 +0000 UTC" firstStartedPulling="2026-01-30 14:00:29.545490167 +0000 UTC m=+1040.246838658" lastFinishedPulling="2026-01-30 14:00:32.075563717 +0000 UTC m=+1042.776912208" observedRunningTime="2026-01-30 14:00:32.637904523 +0000 UTC m=+1043.339253044" watchObservedRunningTime="2026-01-30 14:00:32.638950609 +0000 UTC m=+1043.340299100" Jan 30 14:00:33 crc kubenswrapper[4793]: I0130 14:00:33.607183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerStarted","Data":"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092"} Jan 30 14:00:33 crc kubenswrapper[4793]: I0130 14:00:33.630277 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jsbqs" podStartSLOduration=3.011142405 podStartE2EDuration="5.630262682s" podCreationTimestamp="2026-01-30 14:00:28 +0000 UTC" firstStartedPulling="2026-01-30 14:00:30.562537688 +0000 UTC m=+1041.263886179" lastFinishedPulling="2026-01-30 14:00:33.181657965 +0000 UTC m=+1043.883006456" observedRunningTime="2026-01-30 14:00:33.628190091 +0000 UTC m=+1044.329538592" watchObservedRunningTime="2026-01-30 14:00:33.630262682 +0000 UTC m=+1044.331611173" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.787833 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29"] Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.789918 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.792207 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.801258 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29"] Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.868486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.868554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.868636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969443 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969500 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969988 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.995337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.121272 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.332645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29"] Jan 30 14:00:36 crc kubenswrapper[4793]: W0130 14:00:36.337293 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd35260_c3c5_4f56_b2ba_d47ca60144d8.slice/crio-eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2 WatchSource:0}: Error finding container eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2: Status 404 returned error can't find the container with id eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2 Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.627809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerStarted","Data":"878d99d7959e602dd8cc87e89ddc1c7c2bb3b8f3a1159a3fc592f63dc34a5c3a"} Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.627862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerStarted","Data":"eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2"} Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.352892 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kknzc" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" containerID="cri-o://b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d" gracePeriod=15 Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.634456 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kknzc_69c74b2a-9812-42cf-90b7-b431e2b5f5cf/console/0.log" Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 
14:00:37.634654 4793 generic.go:334] "Generic (PLEG): container finished" podID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerID="b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d" exitCode=2 Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.634749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerDied","Data":"b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d"} Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.635972 4793 generic.go:334] "Generic (PLEG): container finished" podID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerID="878d99d7959e602dd8cc87e89ddc1c7c2bb3b8f3a1159a3fc592f63dc34a5c3a" exitCode=0 Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.636077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"878d99d7959e602dd8cc87e89ddc1c7c2bb3b8f3a1159a3fc592f63dc34a5c3a"} Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.905977 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.906069 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.943898 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.560676 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kknzc_69c74b2a-9812-42cf-90b7-b431e2b5f5cf/console/0.log" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.561060 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.608397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.609473 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.609550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.610086 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.610873 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611678 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611736 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611610 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config" (OuterVolumeSpecName: "console-config") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612414 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612432 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612443 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca" (OuterVolumeSpecName: "service-ca") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.616028 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.616912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.623628 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd" (OuterVolumeSpecName: "kube-api-access-4w2cd") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "kube-api-access-4w2cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652079 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kknzc_69c74b2a-9812-42cf-90b7-b431e2b5f5cf/console/0.log" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652386 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerDied","Data":"333d1fe50b85de201d8359b376659ea922dde6cd7dc921f7d1df2397e061732e"} Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652655 4793 scope.go:117] "RemoveContainer" containerID="b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.695118 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.697592 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.702876 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713204 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713239 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713251 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713262 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.105719 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.106080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.163741 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.705416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:40 crc kubenswrapper[4793]: I0130 14:00:40.406930 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" path="/var/lib/kubelet/pods/69c74b2a-9812-42cf-90b7-b431e2b5f5cf/volumes" Jan 30 14:00:40 crc kubenswrapper[4793]: I0130 14:00:40.668630 4793 generic.go:334] "Generic (PLEG): container finished" podID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerID="4f58c8c0c09b669a69ae5be230231a2d273759024ad947b4a71132c84b7c0ae0" exitCode=0 Jan 30 14:00:40 crc kubenswrapper[4793]: I0130 14:00:40.668720 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"4f58c8c0c09b669a69ae5be230231a2d273759024ad947b4a71132c84b7c0ae0"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.129861 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.130363 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5rsz" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server" containerID="cri-o://6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" gracePeriod=2 Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.511078 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.556113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"94f70350-2f2a-41aa-900d-d42d13231186\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.556155 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"94f70350-2f2a-41aa-900d-d42d13231186\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.556214 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"94f70350-2f2a-41aa-900d-d42d13231186\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.557384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities" (OuterVolumeSpecName: "utilities") pod "94f70350-2f2a-41aa-900d-d42d13231186" (UID: "94f70350-2f2a-41aa-900d-d42d13231186"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.561541 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn" (OuterVolumeSpecName: "kube-api-access-9mfbn") pod "94f70350-2f2a-41aa-900d-d42d13231186" (UID: "94f70350-2f2a-41aa-900d-d42d13231186"). InnerVolumeSpecName "kube-api-access-9mfbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.657506 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.657537 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.677710 4793 generic.go:334] "Generic (PLEG): container finished" podID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerID="a19feb6d08a072aa80c9c8b9c5323dbdc049c25d5690e9bb77d8a86b28541886" exitCode=0 Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.677795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"a19feb6d08a072aa80c9c8b9c5323dbdc049c25d5690e9bb77d8a86b28541886"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682256 4793 generic.go:334] "Generic (PLEG): container finished" podID="94f70350-2f2a-41aa-900d-d42d13231186" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" exitCode=0 Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682303 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682302 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682383 4793 scope.go:117] "RemoveContainer" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.701683 4793 scope.go:117] "RemoveContainer" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.716860 4793 scope.go:117] "RemoveContainer" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.740311 4793 scope.go:117] "RemoveContainer" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" Jan 30 14:00:41 crc kubenswrapper[4793]: E0130 14:00:41.740671 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f\": container with ID starting with 6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f not found: ID does not exist" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.740707 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f"} err="failed to get container status \"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f\": rpc error: code = NotFound desc = could not find container \"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f\": container with ID starting with 6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f not found: ID does not exist" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.740727 4793 scope.go:117] "RemoveContainer" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" Jan 30 14:00:41 crc kubenswrapper[4793]: E0130 14:00:41.741479 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03\": container with ID starting with dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03 not found: ID does not exist" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.741502 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03"} err="failed to get container status \"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03\": rpc error: code = NotFound desc = could not find container \"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03\": container with ID starting with dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03 not found: ID does not exist" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.741516 4793 scope.go:117] "RemoveContainer" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" Jan 30 14:00:41 crc kubenswrapper[4793]: E0130 14:00:41.742091 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22\": container with ID starting with 6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22 not found: ID does not exist" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.742119 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22"} err="failed to get container status \"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22\": rpc error: code = NotFound desc = could not find container \"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22\": container with ID starting with 6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22 not found: ID does not exist" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.348557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94f70350-2f2a-41aa-900d-d42d13231186" (UID: "94f70350-2f2a-41aa-900d-d42d13231186"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.385534 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.599494 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.603585 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.730990 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.731298 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsbqs" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server" containerID="cri-o://1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" gracePeriod=2 Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.932326 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.993562 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.993610 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.993654 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.995154 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle" (OuterVolumeSpecName: "bundle") pod "7bd35260-c3c5-4f56-b2ba-d47ca60144d8" (UID: "7bd35260-c3c5-4f56-b2ba-d47ca60144d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.002245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m" (OuterVolumeSpecName: "kube-api-access-snb7m") pod "7bd35260-c3c5-4f56-b2ba-d47ca60144d8" (UID: "7bd35260-c3c5-4f56-b2ba-d47ca60144d8"). InnerVolumeSpecName "kube-api-access-snb7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.008396 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util" (OuterVolumeSpecName: "util") pod "7bd35260-c3c5-4f56-b2ba-d47ca60144d8" (UID: "7bd35260-c3c5-4f56-b2ba-d47ca60144d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.095310 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.095337 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.095348 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.708620 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2"} Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.708963 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.708747 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.406709 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f70350-2f2a-41aa-900d-d42d13231186" path="/var/lib/kubelet/pods/94f70350-2f2a-41aa-900d-d42d13231186/volumes" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.532408 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.615314 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"31ef0a7f-aa60-4b86-b113-da5bc0614016\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.615382 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"31ef0a7f-aa60-4b86-b113-da5bc0614016\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.615434 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"31ef0a7f-aa60-4b86-b113-da5bc0614016\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.617103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities" (OuterVolumeSpecName: "utilities") pod "31ef0a7f-aa60-4b86-b113-da5bc0614016" (UID: "31ef0a7f-aa60-4b86-b113-da5bc0614016"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.621389 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7" (OuterVolumeSpecName: "kube-api-access-k9rc7") pod "31ef0a7f-aa60-4b86-b113-da5bc0614016" (UID: "31ef0a7f-aa60-4b86-b113-da5bc0614016"). InnerVolumeSpecName "kube-api-access-k9rc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.675983 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31ef0a7f-aa60-4b86-b113-da5bc0614016" (UID: "31ef0a7f-aa60-4b86-b113-da5bc0614016"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.716379 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.716406 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.716422 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719304 4793 generic.go:334] "Generic (PLEG): container finished" podID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" exitCode=0 Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719355 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092"} Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719392 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"398390322b79ae3539c03801cd1c80713e78c256487b16c885394a72c17c0058"} Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719419 4793 scope.go:117] "RemoveContainer" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719593 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.759957 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.761476 4793 scope.go:117] "RemoveContainer" containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.763862 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.779313 4793 scope.go:117] "RemoveContainer" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793169 4793 scope.go:117] "RemoveContainer" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" Jan 30 14:00:44 crc kubenswrapper[4793]: E0130 14:00:44.793523 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092\": container with ID starting with 1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092 not found: ID does not exist" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793562 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092"} err="failed to get container status \"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092\": rpc error: code = NotFound desc = could not find container \"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092\": container with ID starting with 1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092 not found: ID does not exist" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793590 4793 scope.go:117] "RemoveContainer" containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" Jan 30 14:00:44 crc kubenswrapper[4793]: E0130 14:00:44.793867 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a\": container with ID starting with 461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a not found: ID does not exist" containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793929 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a"} err="failed to get container status \"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a\": rpc error: code = NotFound desc = could not find container \"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a\": container with ID starting with 461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a not found: ID does not exist" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793963 4793 scope.go:117] "RemoveContainer" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" Jan 30 14:00:44 crc kubenswrapper[4793]: E0130 14:00:44.794337 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed\": container with ID starting with e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed not found: ID does not exist" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.794368 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed"} err="failed to get container status \"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed\": rpc error: code = NotFound desc = could not find container \"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed\": container with ID starting with e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed not found: ID does not exist" Jan 30 14:00:46 crc kubenswrapper[4793]: I0130 14:00:46.405217 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" path="/var/lib/kubelet/pods/31ef0a7f-aa60-4b86-b113-da5bc0614016/volumes" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941311 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"] Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941762 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941774 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941791 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941798 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941804 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="extract" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941810 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="extract" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941820 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-utilities" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941826 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-utilities" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941834 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="util" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941841 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="util" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941848 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-utilities" Jan 30 14:00:51 
crc kubenswrapper[4793]: I0130 14:00:51.941855 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-utilities"
Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941864 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941870 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server"
Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941877 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941883 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server"
Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941891 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="pull"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941896 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="pull"
Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941905 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941910 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942025 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942033 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="extract"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942064 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942078 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server"
Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942433 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"
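The paired cpu_manager.go:410 / state_mem.go:107 entries above (plus the memory_manager.go:354 lines) record the resource managers dropping per-container assignments left behind by pods that no longer exist, just before the new metallb pod is admitted. A sketch of that bookkeeping under assumed types; the map layout and names here are hypothetical, not the kubelet's:

// Illustrative sketch of stale-state removal: drop every per-container
// assignment whose pod is no longer active. Types are invented for brevity.
package main

import "fmt"

type state struct {
	// podUID -> containerName -> assigned CPU set (as a string here)
	assignments map[string]map[string]string
}

// removeStaleState deletes every assignment whose pod is not in activePods,
// logging one line per removed container, as in the log above.
func (s *state) removeStaleState(activePods map[string]bool) {
	for podUID, containers := range s.assignments {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container pod=%s container=%s\n", podUID, name)
			delete(containers, name)
		}
		delete(s.assignments, podUID)
	}
}

func main() {
	s := &state{assignments: map[string]map[string]string{
		"31ef0a7f": {"registry-server": "0-1", "extract-content": "2"},
		"75266e51": {"manager": "3"},
	}}
	// Only pod 75266e51 is still active; 31ef0a7f's entries are stale.
	s.removeStaleState(map[string]bool{"75266e51": true})
}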
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.944607 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.944932 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.945170 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.945830 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.948294 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9xc56" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.963768 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"] Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.022978 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-apiservice-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.023077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2xm\" (UniqueName: \"kubernetes.io/projected/75266e51-59ee-432d-b56a-ba972e5ff25b-kube-api-access-mv2xm\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.023246 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-webhook-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.124169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-webhook-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.124260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-apiservice-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.124314 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2xm\" (UniqueName: \"kubernetes.io/projected/75266e51-59ee-432d-b56a-ba972e5ff25b-kube-api-access-mv2xm\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.133594 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-apiservice-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.144724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2xm\" (UniqueName: \"kubernetes.io/projected/75266e51-59ee-432d-b56a-ba972e5ff25b-kube-api-access-mv2xm\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.145658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-webhook-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.259896 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.377109 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm"] Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.377907 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.380197 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-s8xbv" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.380479 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.384371 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.464854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-webhook-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.465135 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5sqk\" (UniqueName: \"kubernetes.io/projected/45949f1b-1075-4d7f-9007-8525e0364a55-kube-api-access-n5sqk\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.465227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-apiservice-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.529137 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm"] Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.567121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-webhook-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.567322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5sqk\" (UniqueName: \"kubernetes.io/projected/45949f1b-1075-4d7f-9007-8525e0364a55-kube-api-access-n5sqk\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.568753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-apiservice-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 
14:00:52.581772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-apiservice-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.618244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-webhook-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.623694 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5sqk\" (UniqueName: \"kubernetes.io/projected/45949f1b-1075-4d7f-9007-8525e0364a55-kube-api-access-n5sqk\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.718730 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.760210 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"] Jan 30 14:00:52 crc kubenswrapper[4793]: W0130 14:00:52.770756 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75266e51_59ee_432d_b56a_ba972e5ff25b.slice/crio-08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a WatchSource:0}: Error finding container 08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a: Status 404 returned error can't find the container with id 08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a Jan 30 14:00:53 crc kubenswrapper[4793]: I0130 14:00:53.223416 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm"] Jan 30 14:00:53 crc kubenswrapper[4793]: I0130 14:00:53.771536 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" event={"ID":"45949f1b-1075-4d7f-9007-8525e0364a55","Type":"ContainerStarted","Data":"81d327f9e4d091c903ed44b2db98cb10b84595ae7403eb29a1d2920048220390"} Jan 30 14:00:53 crc kubenswrapper[4793]: I0130 14:00:53.773227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" event={"ID":"75266e51-59ee-432d-b56a-ba972e5ff25b","Type":"ContainerStarted","Data":"08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a"} Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.813907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" event={"ID":"45949f1b-1075-4d7f-9007-8525e0364a55","Type":"ContainerStarted","Data":"dbef2070ced1e914831bc297e4931170b201bc2f7f1e8591044ac25b8271cc4e"} Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.814481 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.815961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" event={"ID":"75266e51-59ee-432d-b56a-ba972e5ff25b","Type":"ContainerStarted","Data":"b46c2926e29b4e95f5f5d0040c3d731c6dae55acef58ff1dd29e79cd77ae5414"} Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.816117 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.854667 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" podStartSLOduration=1.800471706 podStartE2EDuration="8.854653299s" podCreationTimestamp="2026-01-30 14:00:52 +0000 UTC" firstStartedPulling="2026-01-30 14:00:53.233640421 +0000 UTC m=+1063.934988912" lastFinishedPulling="2026-01-30 14:01:00.287822014 +0000 UTC m=+1070.989170505" observedRunningTime="2026-01-30 14:01:00.852743772 +0000 UTC m=+1071.554092263" watchObservedRunningTime="2026-01-30 14:01:00.854653299 +0000 UTC m=+1071.556001790" Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.881562 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" podStartSLOduration=2.388075282 podStartE2EDuration="9.881540274s" podCreationTimestamp="2026-01-30 14:00:51 +0000 UTC" firstStartedPulling="2026-01-30 14:00:52.776831125 +0000 UTC m=+1063.478179616" lastFinishedPulling="2026-01-30 14:01:00.270296107 +0000 UTC m=+1070.971644608" observedRunningTime="2026-01-30 14:01:00.875348292 +0000 UTC m=+1071.576696783" watchObservedRunningTime="2026-01-30 14:01:00.881540274 +0000 UTC m=+1071.582888775" Jan 30 14:01:12 crc kubenswrapper[4793]: I0130 14:01:12.725676 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.264690 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.954390 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"] Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.955221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.960759 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vsdkv"] Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.962295 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.962702 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vfh4l" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.963420 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.966264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.977464 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"] Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.982036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.066600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g9hvr"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.067421 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.071975 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.072028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wpw4n" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.072127 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.072160 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.081647 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7nlfd"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.083980 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.086326 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.096230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7nlfd"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115699 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-sockets\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115764 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-reloader\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115857 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-startup\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115948 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxrl\" (UniqueName: \"kubernetes.io/projected/e5a76649-d081-4224-baca-095ca1ffadfd-kube-api-access-5gxrl\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5a76649-d081-4224-baca-095ca1ffadfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.116006 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-conf\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.116029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc 
kubenswrapper[4793]: I0130 14:01:33.116064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vn25\" (UniqueName: \"kubernetes.io/projected/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-kube-api-access-5vn25\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-startup\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217542 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217599 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217642 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gxrl\" (UniqueName: \"kubernetes.io/projected/e5a76649-d081-4224-baca-095ca1ffadfd-kube-api-access-5gxrl\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5a76649-d081-4224-baca-095ca1ffadfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217686 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-conf\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217729 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5vn25\" (UniqueName: \"kubernetes.io/projected/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-kube-api-access-5vn25\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217760 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tznfd\" (UniqueName: \"kubernetes.io/projected/519ea47c-0d76-44cb-af34-823c71e508c9-kube-api-access-tznfd\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmgd\" (UniqueName: \"kubernetes.io/projected/34253a93-968b-47e2-aa0d-43ddb72f29f5-kube-api-access-nqmgd\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-sockets\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217831 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-cert\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-reloader\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217875 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/519ea47c-0d76-44cb-af34-823c71e508c9-metallb-excludel2\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218378 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-startup\") pod \"frr-k8s-vsdkv\" (UID: 
\"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218478 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-sockets\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218551 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-reloader\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.218553 4793 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-conf\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.218744 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs podName:fd03c93b-a2a7-4a2f-9292-29c4e7fe9640 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.71872899 +0000 UTC m=+1104.420077491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs") pod "frr-k8s-vsdkv" (UID: "fd03c93b-a2a7-4a2f-9292-29c4e7fe9640") : secret "frr-k8s-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.239751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5a76649-d081-4224-baca-095ca1ffadfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.244542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vn25\" (UniqueName: \"kubernetes.io/projected/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-kube-api-access-5vn25\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.248176 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxrl\" (UniqueName: \"kubernetes.io/projected/e5a76649-d081-4224-baca-095ca1ffadfd-kube-api-access-5gxrl\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.271864 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-cert\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/519ea47c-0d76-44cb-af34-823c71e508c9-metallb-excludel2\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320622 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320698 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320754 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.820737864 +0000 UTC m=+1104.522086345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist") pod "speaker-g9hvr" (UID: "519ea47c-0d76-44cb-af34-823c71e508c9") : secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320867 4793 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320904 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs podName:34253a93-968b-47e2-aa0d-43ddb72f29f5 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.820894348 +0000 UTC m=+1104.522242839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs") pod "controller-6968d8fdc4-7nlfd" (UID: "34253a93-968b-47e2-aa0d-43ddb72f29f5") : secret "controller-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320943 4793 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320962 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.82095565 +0000 UTC m=+1104.522304141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs") pod "speaker-g9hvr" (UID: "519ea47c-0d76-44cb-af34-823c71e508c9") : secret "speaker-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tznfd\" (UniqueName: \"kubernetes.io/projected/519ea47c-0d76-44cb-af34-823c71e508c9-kube-api-access-tznfd\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmgd\" (UniqueName: \"kubernetes.io/projected/34253a93-968b-47e2-aa0d-43ddb72f29f5-kube-api-access-nqmgd\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.321606 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.321608 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/519ea47c-0d76-44cb-af34-823c71e508c9-metallb-excludel2\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.334970 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-cert\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.338688 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tznfd\" (UniqueName: \"kubernetes.io/projected/519ea47c-0d76-44cb-af34-823c71e508c9-kube-api-access-tznfd\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.342033 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqmgd\" (UniqueName: \"kubernetes.io/projected/34253a93-968b-47e2-aa0d-43ddb72f29f5-kube-api-access-nqmgd\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc 
kubenswrapper[4793]: I0130 14:01:33.728509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.736262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.764557 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.831086 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.831172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.831292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.831519 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.831619 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:34.831592306 +0000 UTC m=+1105.532940797 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist") pod "speaker-g9hvr" (UID: "519ea47c-0d76-44cb-af34-823c71e508c9") : secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.833835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.833986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.886343 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.997753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.009561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" event={"ID":"e5a76649-d081-4224-baca-095ca1ffadfd","Type":"ContainerStarted","Data":"9b5146874d13f4d31f06aaddacf281561f7d46f6b077b48c51b9f000dcbd0d0e"} Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.010681 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"7c95a59c48a92c8b366fdea9ed434d8bf644e5ffdfe2e07fd52e0c27e610d4ef"} Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.391112 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7nlfd"] Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.842860 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.852693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.886853 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-g9hvr" Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7nlfd" event={"ID":"34253a93-968b-47e2-aa0d-43ddb72f29f5","Type":"ContainerStarted","Data":"dfe4279ae2d210bbf8bd9d5d3aa03cafb76b2fdf6ec4618b351487593e95ef25"} Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031582 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7nlfd" event={"ID":"34253a93-968b-47e2-aa0d-43ddb72f29f5","Type":"ContainerStarted","Data":"ae040937e950a1c01e1aa55941b17be8c87c194c59c5618d30f55e781e060b98"} Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7nlfd" event={"ID":"34253a93-968b-47e2-aa0d-43ddb72f29f5","Type":"ContainerStarted","Data":"a55ad93b00780cc06317a7b9db28a3a4c7a5e17111bf25afb1a36dafa8b69089"} Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031800 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.034070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9hvr" event={"ID":"519ea47c-0d76-44cb-af34-823c71e508c9","Type":"ContainerStarted","Data":"9ba01354ad7958c4a9de1ad88f1cde32729059ade62d1aee9109e3b563002e03"} Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.073844 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7nlfd" podStartSLOduration=2.07382175 podStartE2EDuration="2.07382175s" podCreationTimestamp="2026-01-30 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:01:35.069894355 +0000 UTC m=+1105.771242856" watchObservedRunningTime="2026-01-30 14:01:35.07382175 +0000 UTC m=+1105.775170241" Jan 30 14:01:36 crc kubenswrapper[4793]: I0130 14:01:36.047211 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9hvr" event={"ID":"519ea47c-0d76-44cb-af34-823c71e508c9","Type":"ContainerStarted","Data":"1ee9c367594a5e421e3f6c274d3afcfc88807ccc5d199b8056f6b242eb22fa63"} Jan 30 14:01:36 crc kubenswrapper[4793]: I0130 14:01:36.047519 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g9hvr" Jan 30 14:01:36 crc kubenswrapper[4793]: I0130 14:01:36.047531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9hvr" event={"ID":"519ea47c-0d76-44cb-af34-823c71e508c9","Type":"ContainerStarted","Data":"ca87acd46560ec991e58acc711014a3627c02fb69a2e338aecda554a575aac37"} Jan 30 14:01:40 crc kubenswrapper[4793]: I0130 14:01:40.420061 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g9hvr" podStartSLOduration=7.420030686 podStartE2EDuration="7.420030686s" podCreationTimestamp="2026-01-30 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:01:36.083505741 +0000 UTC m=+1106.784854232" watchObservedRunningTime="2026-01-30 14:01:40.420030686 +0000 UTC m=+1111.121379177" Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.092508 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" event={"ID":"e5a76649-d081-4224-baca-095ca1ffadfd","Type":"ContainerStarted","Data":"6b23b23e36b036d21c9866e86f4bd4415a7380ce763e80d79b935aeba20ce3c5"} Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.092828 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.094155 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerID="5fb7a29a436be87a8e763d75695e072acdaf8c223e4c56b2767918ce48a6729d" exitCode=0 Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.094191 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerDied","Data":"5fb7a29a436be87a8e763d75695e072acdaf8c223e4c56b2767918ce48a6729d"} Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.118268 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" podStartSLOduration=2.025261752 podStartE2EDuration="10.118251886s" podCreationTimestamp="2026-01-30 14:01:32 +0000 UTC" firstStartedPulling="2026-01-30 14:01:33.772978988 +0000 UTC m=+1104.474327479" lastFinishedPulling="2026-01-30 14:01:41.865969122 +0000 UTC m=+1112.567317613" observedRunningTime="2026-01-30 14:01:42.113656273 +0000 UTC m=+1112.815004764" watchObservedRunningTime="2026-01-30 14:01:42.118251886 +0000 UTC m=+1112.819600377" Jan 30 14:01:43 crc kubenswrapper[4793]: I0130 14:01:43.113639 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerID="edb8533e88a849f1bc20730726fbe83503bc548487a645c00ef105a432a537d9" exitCode=0 Jan 30 14:01:43 crc kubenswrapper[4793]: I0130 14:01:43.114229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerDied","Data":"edb8533e88a849f1bc20730726fbe83503bc548487a645c00ef105a432a537d9"} Jan 30 14:01:44 crc kubenswrapper[4793]: I0130 14:01:44.121720 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerID="cc22063a433ce7648df80092a5841177f3d98616c476be6534e1f35058b90b32" exitCode=0 Jan 30 14:01:44 crc kubenswrapper[4793]: I0130 14:01:44.121767 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerDied","Data":"cc22063a433ce7648df80092a5841177f3d98616c476be6534e1f35058b90b32"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133661 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"401c2b5ead30e687f60f4190bd3d1b789c35a8d9e3ca757b388835ef5fa1fb62"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133944 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"94de661169ac29ab4c772ec6fcc3de9a07e741647366c5bd7485a59b1e993bb2"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 
14:01:45.133975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"829455f8b6a1962bc87cddb75f4aa4d1e13f7edab06ef6a93b948d66d5bbbdfe"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"494e73aa0666e7b28870c90b627b6bf761e6bff3d3a4def4e212a20175893e3a"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"8988296c867e968080559962b56c1726bac7a6dddb3743bef5827f83de1a5510"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.134010 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"40986c71bb128f32b68cfdba9fba550c525aec47170eb0a732f261df8d267654"} Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.171190 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vsdkv" podStartSLOduration=5.341750885 podStartE2EDuration="13.171168819s" podCreationTimestamp="2026-01-30 14:01:32 +0000 UTC" firstStartedPulling="2026-01-30 14:01:33.997597999 +0000 UTC m=+1104.698946490" lastFinishedPulling="2026-01-30 14:01:41.827015923 +0000 UTC m=+1112.528364424" observedRunningTime="2026-01-30 14:01:45.167304984 +0000 UTC m=+1115.868653515" watchObservedRunningTime="2026-01-30 14:01:45.171168819 +0000 UTC m=+1115.872517330" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.576825 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-56nnw"] Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.592418 4793 util.go:30] "No sandbox for pod can be found. 
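
The frr-k8s-vsdkv events above show the init-container phase: three containers each reported ContainerDied with exitCode=0 between 14:01:42 and 14:01:44, and only then do the six long-running containers all start at 14:01:45. Init containers run strictly one at a time, and a non-zero exit blocks everything after them, app containers included. A dependency-free sketch of that ordering rule (the container names are placeholders; the log only records container IDs):

    package main

    import "fmt"

    // runInit mimics the kubelet rule: init containers run sequentially,
    // and a non-zero exit keeps the pod in its Init state.
    func runInit(inits []string, run func(string) int) bool {
        for _, c := range inits {
            if code := run(c); code != 0 {
                fmt.Printf("init %q exited %d; pod stays in Init state\n", c, code)
                return false
            }
            fmt.Printf("init %q exited 0; starting next\n", c)
        }
        return true
    }

    func main() {
        inits := []string{"cp-frr-files", "cp-reloader", "cp-metrics"} // placeholder names
        if runInit(inits, func(string) int { return 0 }) {
            fmt.Println("all init containers done; app containers start together")
        }
    }
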
Need to start a new one" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.608614 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56nnw"] Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.710239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.710321 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.710356 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.811796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.811871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.811925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.812349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.812821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.844473 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.932736 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:47 crc kubenswrapper[4793]: I0130 14:01:47.217431 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56nnw"] Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.159122 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08" exitCode=0 Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.159253 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"} Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.159421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerStarted","Data":"d70b5154f44309d65bfea32a6c3d3a229ac334ced2b321492bb858f4e69e0990"} Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.887327 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.930413 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:50 crc kubenswrapper[4793]: I0130 14:01:50.172044 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923" exitCode=0 Jan 30 14:01:50 crc kubenswrapper[4793]: I0130 14:01:50.172115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"} Jan 30 14:01:51 crc kubenswrapper[4793]: I0130 14:01:51.180376 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerStarted","Data":"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"} Jan 30 14:01:51 crc kubenswrapper[4793]: I0130 14:01:51.210925 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-56nnw" podStartSLOduration=2.753086346 podStartE2EDuration="5.210908455s" podCreationTimestamp="2026-01-30 14:01:46 +0000 UTC" firstStartedPulling="2026-01-30 14:01:48.161007926 +0000 UTC m=+1118.862356417" lastFinishedPulling="2026-01-30 14:01:50.618830035 +0000 UTC m=+1121.320178526" observedRunningTime="2026-01-30 14:01:51.206501398 +0000 UTC m=+1121.907849929" watchObservedRunningTime="2026-01-30 14:01:51.210908455 +0000 UTC m=+1121.912256956" Jan 30 14:01:53 crc kubenswrapper[4793]: I0130 14:01:53.276433 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:54 crc kubenswrapper[4793]: I0130 14:01:54.001037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:54 crc kubenswrapper[4793]: I0130 14:01:54.895348 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g9hvr" Jan 30 14:01:56 crc kubenswrapper[4793]: I0130 14:01:56.933773 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:56 crc kubenswrapper[4793]: I0130 14:01:56.934014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:56 crc kubenswrapper[4793]: I0130 14:01:56.992850 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:57 crc kubenswrapper[4793]: I0130 14:01:57.269827 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:57 crc kubenswrapper[4793]: I0130 14:01:57.306352 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56nnw"] Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.234002 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-56nnw" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server" containerID="cri-o://e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" gracePeriod=2 Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.606896 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.778823 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.779002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.779069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.780161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities" (OuterVolumeSpecName: "utilities") pod "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" (UID: "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.788204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms" (OuterVolumeSpecName: "kube-api-access-wxjms") pod "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" (UID: "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf"). InnerVolumeSpecName "kube-api-access-wxjms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.837566 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" (UID: "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.880345 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.880404 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") on node \"crc\" DevicePath \"\"" Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.880414 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242610 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" exitCode=0 Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242664 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"} Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242697 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"d70b5154f44309d65bfea32a6c3d3a229ac334ced2b321492bb858f4e69e0990"} Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242720 4793 scope.go:117] "RemoveContainer" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242776 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-56nnw" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.266498 4793 scope.go:117] "RemoveContainer" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.276456 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56nnw"] Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.285527 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-56nnw"] Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.299660 4793 scope.go:117] "RemoveContainer" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.319374 4793 scope.go:117] "RemoveContainer" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" Jan 30 14:02:00 crc kubenswrapper[4793]: E0130 14:02:00.319956 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e\": container with ID starting with e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e not found: ID does not exist" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.319997 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"} err="failed to get container status \"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e\": rpc error: code = NotFound desc = could not find container \"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e\": container with ID starting with e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e not found: ID does not exist" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.320026 4793 scope.go:117] "RemoveContainer" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923" Jan 30 14:02:00 crc kubenswrapper[4793]: E0130 14:02:00.320522 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923\": container with ID starting with 98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923 not found: ID does not exist" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.320566 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"} err="failed to get container status \"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923\": rpc error: code = NotFound desc = could not find container \"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923\": container with ID starting with 98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923 not found: ID does not exist" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.320595 4793 scope.go:117] "RemoveContainer" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08" Jan 30 14:02:00 crc kubenswrapper[4793]: E0130 14:02:00.322142 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08\": container with ID starting with af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08 not found: ID does not exist" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.322175 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"} err="failed to get container status \"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08\": rpc error: code = NotFound desc = could not find container \"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08\": container with ID starting with af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08 not found: ID does not exist" Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.405671 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" path="/var/lib/kubelet/pods/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf/volumes" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.234840 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x56zx"] Jan 30 14:02:01 crc kubenswrapper[4793]: E0130 14:02:01.235756 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-content" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.235861 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-content" Jan 30 14:02:01 crc kubenswrapper[4793]: E0130 14:02:01.235964 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-utilities" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.236030 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-utilities" Jan 30 14:02:01 crc kubenswrapper[4793]: E0130 14:02:01.236174 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.236278 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.236483 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.237073 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.238886 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-sl2qr" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.240093 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.240111 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.248617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x56zx"] Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.300083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbt5q\" (UniqueName: \"kubernetes.io/projected/e3b6e703-4540-4739-87cd-8699d4e04903-kube-api-access-mbt5q\") pod \"openstack-operator-index-x56zx\" (UID: \"e3b6e703-4540-4739-87cd-8699d4e04903\") " pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.400959 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbt5q\" (UniqueName: \"kubernetes.io/projected/e3b6e703-4540-4739-87cd-8699d4e04903-kube-api-access-mbt5q\") pod \"openstack-operator-index-x56zx\" (UID: \"e3b6e703-4540-4739-87cd-8699d4e04903\") " pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.428230 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbt5q\" (UniqueName: \"kubernetes.io/projected/e3b6e703-4540-4739-87cd-8699d4e04903-kube-api-access-mbt5q\") pod \"openstack-operator-index-x56zx\" (UID: \"e3b6e703-4540-4739-87cd-8699d4e04903\") " pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.560462 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.949854 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x56zx"] Jan 30 14:02:02 crc kubenswrapper[4793]: I0130 14:02:02.264843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x56zx" event={"ID":"e3b6e703-4540-4739-87cd-8699d4e04903","Type":"ContainerStarted","Data":"226d55b90f69dab77f9d4235c816591b31c824d25f367bf23d510f0a1936f75c"} Jan 30 14:02:03 crc kubenswrapper[4793]: I0130 14:02:03.890364 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:02:05 crc kubenswrapper[4793]: I0130 14:02:05.289306 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x56zx" event={"ID":"e3b6e703-4540-4739-87cd-8699d4e04903","Type":"ContainerStarted","Data":"f708679f5ce339156245bdd2a083fd4fa03d7d616c7d2d83ab2a8b5931ea4852"} Jan 30 14:02:05 crc kubenswrapper[4793]: I0130 14:02:05.305952 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x56zx" podStartSLOduration=1.566048748 podStartE2EDuration="4.305934597s" podCreationTimestamp="2026-01-30 14:02:01 +0000 UTC" firstStartedPulling="2026-01-30 14:02:01.958360768 +0000 UTC m=+1132.659709269" lastFinishedPulling="2026-01-30 14:02:04.698246627 +0000 UTC m=+1135.399595118" observedRunningTime="2026-01-30 14:02:05.303960919 +0000 UTC m=+1136.005309440" watchObservedRunningTime="2026-01-30 14:02:05.305934597 +0000 UTC m=+1136.007283088" Jan 30 14:02:11 crc kubenswrapper[4793]: I0130 14:02:11.561648 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:11 crc kubenswrapper[4793]: I0130 14:02:11.562222 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:11 crc kubenswrapper[4793]: I0130 14:02:11.588441 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:12 crc kubenswrapper[4793]: I0130 14:02:12.350940 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x56zx" Jan 30 14:02:12 crc kubenswrapper[4793]: I0130 14:02:12.413768 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:02:12 crc kubenswrapper[4793]: I0130 14:02:12.413821 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.469503 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"] Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.471236 4793 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.474761 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wlc8d" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.479669 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"] Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.672647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.672715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.672747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774034 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.799870 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:15 crc kubenswrapper[4793]: I0130 14:02:15.096294 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:15 crc kubenswrapper[4793]: I0130 14:02:15.511298 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"] Jan 30 14:02:16 crc kubenswrapper[4793]: I0130 14:02:16.351486 4793 generic.go:334] "Generic (PLEG): container finished" podID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerID="917aa863c9411cd87fd6db746368b2d0374fb47ded475d4d6f1c8c96e997d0aa" exitCode=0 Jan 30 14:02:16 crc kubenswrapper[4793]: I0130 14:02:16.351593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"917aa863c9411cd87fd6db746368b2d0374fb47ded475d4d6f1c8c96e997d0aa"} Jan 30 14:02:16 crc kubenswrapper[4793]: I0130 14:02:16.351799 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerStarted","Data":"94f654f563720b16c4af45497f07bcd9437b1b119ad6e43f2e4b5fb59b7f5fa5"} Jan 30 14:02:18 crc kubenswrapper[4793]: I0130 14:02:18.370202 4793 generic.go:334] "Generic (PLEG): container finished" podID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerID="a2ca29c6644180959d91d56ab86c50b3648a6735e80885b2aa1ae3ac4af651ea" exitCode=0 Jan 30 14:02:18 crc kubenswrapper[4793]: I0130 14:02:18.370249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"a2ca29c6644180959d91d56ab86c50b3648a6735e80885b2aa1ae3ac4af651ea"} Jan 30 14:02:19 crc kubenswrapper[4793]: I0130 14:02:19.380005 4793 generic.go:334] "Generic (PLEG): container finished" podID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerID="9be403200582a47cb3f99a3a4e1fbd1249a57d1ec973d6ccd83c1f3684be0107" exitCode=0 Jan 30 14:02:19 crc kubenswrapper[4793]: I0130 14:02:19.380107 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"9be403200582a47cb3f99a3a4e1fbd1249a57d1ec973d6ccd83c1f3684be0107"} Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.632028 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.751692 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.751822 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.751861 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.752387 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle" (OuterVolumeSpecName: "bundle") pod "fa68ea40-d98a-4561-8dce-aa3e81fe5a96" (UID: "fa68ea40-d98a-4561-8dce-aa3e81fe5a96"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.763783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl" (OuterVolumeSpecName: "kube-api-access-tthnl") pod "fa68ea40-d98a-4561-8dce-aa3e81fe5a96" (UID: "fa68ea40-d98a-4561-8dce-aa3e81fe5a96"). InnerVolumeSpecName "kube-api-access-tthnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.768317 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util" (OuterVolumeSpecName: "util") pod "fa68ea40-d98a-4561-8dce-aa3e81fe5a96" (UID: "fa68ea40-d98a-4561-8dce-aa3e81fe5a96"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.855786 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") on node \"crc\" DevicePath \"\"" Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.855826 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.855848 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") on node \"crc\" DevicePath \"\"" Jan 30 14:02:21 crc kubenswrapper[4793]: I0130 14:02:21.395994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"94f654f563720b16c4af45497f07bcd9437b1b119ad6e43f2e4b5fb59b7f5fa5"} Jan 30 14:02:21 crc kubenswrapper[4793]: I0130 14:02:21.396091 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" Jan 30 14:02:21 crc kubenswrapper[4793]: I0130 14:02:21.396114 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f654f563720b16c4af45497f07bcd9437b1b119ad6e43f2e4b5fb59b7f5fa5" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.791650 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"] Jan 30 14:02:24 crc kubenswrapper[4793]: E0130 14:02:24.792079 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="pull" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792109 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="pull" Jan 30 14:02:24 crc kubenswrapper[4793]: E0130 14:02:24.792127 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="extract" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792133 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="extract" Jan 30 14:02:24 crc kubenswrapper[4793]: E0130 14:02:24.792147 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="util" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792153 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="util" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792264 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="extract" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792621 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.795227 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nl4zd" Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.823829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"] Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.957671 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwdn\" (UniqueName: \"kubernetes.io/projected/2cec3782-823b-4ddf-909a-e773203cd721-kube-api-access-vxwdn\") pod \"openstack-operator-controller-init-977cfdb67-sp4rd\" (UID: \"2cec3782-823b-4ddf-909a-e773203cd721\") " pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.059535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwdn\" (UniqueName: \"kubernetes.io/projected/2cec3782-823b-4ddf-909a-e773203cd721-kube-api-access-vxwdn\") pod \"openstack-operator-controller-init-977cfdb67-sp4rd\" (UID: \"2cec3782-823b-4ddf-909a-e773203cd721\") " pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.088118 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwdn\" (UniqueName: \"kubernetes.io/projected/2cec3782-823b-4ddf-909a-e773203cd721-kube-api-access-vxwdn\") pod \"openstack-operator-controller-init-977cfdb67-sp4rd\" (UID: \"2cec3782-823b-4ddf-909a-e773203cd721\") " pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.109759 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.644285 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"] Jan 30 14:02:26 crc kubenswrapper[4793]: I0130 14:02:26.429610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" event={"ID":"2cec3782-823b-4ddf-909a-e773203cd721","Type":"ContainerStarted","Data":"36eba34a55476c58bc4d8b188b293d9323ab5932c2ff24e77e6d450f745e8661"} Jan 30 14:02:30 crc kubenswrapper[4793]: I0130 14:02:30.469430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" event={"ID":"2cec3782-823b-4ddf-909a-e773203cd721","Type":"ContainerStarted","Data":"c859e3c068c9bd897c5311ad1b1ea39e519eae368b7bbe2936f5bf181bbf8c4b"} Jan 30 14:02:30 crc kubenswrapper[4793]: I0130 14:02:30.470375 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:30 crc kubenswrapper[4793]: I0130 14:02:30.505694 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" podStartSLOduration=2.131778723 podStartE2EDuration="6.505678078s" podCreationTimestamp="2026-01-30 14:02:24 +0000 UTC" firstStartedPulling="2026-01-30 14:02:25.670553309 +0000 UTC m=+1156.371901810" lastFinishedPulling="2026-01-30 14:02:30.044452624 +0000 UTC m=+1160.745801165" observedRunningTime="2026-01-30 14:02:30.50086212 +0000 UTC m=+1161.202210621" watchObservedRunningTime="2026-01-30 14:02:30.505678078 +0000 UTC m=+1161.207026569" Jan 30 14:02:35 crc kubenswrapper[4793]: I0130 14:02:35.113776 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" Jan 30 14:02:42 crc kubenswrapper[4793]: I0130 14:02:42.414347 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:02:42 crc kubenswrapper[4793]: I0130 14:02:42.414884 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.225921 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.227195 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.231867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2zhj7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.240679 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.241471 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.251253 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z8x8b" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.270040 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.270893 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.275972 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-slkkb" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.307960 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.317662 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.318372 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.324013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-l44rg" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.326983 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl4hd\" (UniqueName: \"kubernetes.io/projected/1d859404-a29c-46c9-b66a-fed5ff0b13f0-kube-api-access-jl4hd\") pod \"glance-operator-controller-manager-8886f4c47-g5848\" (UID: \"1d859404-a29c-46c9-b66a-fed5ff0b13f0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.327140 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bkt\" (UniqueName: \"kubernetes.io/projected/8835e5d9-c37d-4744-95cb-c56c10a58647-kube-api-access-l8bkt\") pod \"cinder-operator-controller-manager-8d874c8fc-9kwwr\" (UID: \"8835e5d9-c37d-4744-95cb-c56c10a58647\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.327161 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tw28\" (UniqueName: \"kubernetes.io/projected/ec981da4-a3ba-4e4e-a0eb-2168ab79fe77-kube-api-access-5tw28\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8bg6c\" (UID: \"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.327233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpv7\" (UniqueName: \"kubernetes.io/projected/6f991e04-2db3-4b32-bc83-8bbce4ce7a08-kube-api-access-wmpv7\") pod \"designate-operator-controller-manager-6d9697b7f4-hjpkr\" (UID: \"6f991e04-2db3-4b32-bc83-8bbce4ce7a08\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.348931 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.349812 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.352917 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fdblm" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.377160 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.377952 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.381936 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pbsph" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.385106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.397203 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.422396 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.423290 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.427650 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ct9pn" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.427800 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.428756 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j72j\" (UniqueName: \"kubernetes.io/projected/8d24cd33-2902-424a-8ffc-76b1e4c2f482-kube-api-access-9j72j\") pod \"heat-operator-controller-manager-69d6db494d-k4tz9\" (UID: \"8d24cd33-2902-424a-8ffc-76b1e4c2f482\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/710c57e4-a09e-4db1-a03b-13db05085d41-kube-api-access-4ptwr\") pod \"horizon-operator-controller-manager-5fb775575f-m4q78\" (UID: \"710c57e4-a09e-4db1-a03b-13db05085d41\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430412 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bkt\" (UniqueName: \"kubernetes.io/projected/8835e5d9-c37d-4744-95cb-c56c10a58647-kube-api-access-l8bkt\") pod \"cinder-operator-controller-manager-8d874c8fc-9kwwr\" (UID: \"8835e5d9-c37d-4744-95cb-c56c10a58647\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430497 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tw28\" (UniqueName: 
\"kubernetes.io/projected/ec981da4-a3ba-4e4e-a0eb-2168ab79fe77-kube-api-access-5tw28\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8bg6c\" (UID: \"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzd2\" (UniqueName: \"kubernetes.io/projected/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-kube-api-access-7vzd2\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430711 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpv7\" (UniqueName: \"kubernetes.io/projected/6f991e04-2db3-4b32-bc83-8bbce4ce7a08-kube-api-access-wmpv7\") pod \"designate-operator-controller-manager-6d9697b7f4-hjpkr\" (UID: \"6f991e04-2db3-4b32-bc83-8bbce4ce7a08\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430811 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl4hd\" (UniqueName: \"kubernetes.io/projected/1d859404-a29c-46c9-b66a-fed5ff0b13f0-kube-api-access-jl4hd\") pod \"glance-operator-controller-manager-8886f4c47-g5848\" (UID: \"1d859404-a29c-46c9-b66a-fed5ff0b13f0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.441191 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.469913 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.475438 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bkt\" (UniqueName: \"kubernetes.io/projected/8835e5d9-c37d-4744-95cb-c56c10a58647-kube-api-access-l8bkt\") pod \"cinder-operator-controller-manager-8d874c8fc-9kwwr\" (UID: \"8835e5d9-c37d-4744-95cb-c56c10a58647\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.489390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.489923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpv7\" (UniqueName: \"kubernetes.io/projected/6f991e04-2db3-4b32-bc83-8bbce4ce7a08-kube-api-access-wmpv7\") pod \"designate-operator-controller-manager-6d9697b7f4-hjpkr\" (UID: \"6f991e04-2db3-4b32-bc83-8bbce4ce7a08\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.492860 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.515456 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.520876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl4hd\" (UniqueName: \"kubernetes.io/projected/1d859404-a29c-46c9-b66a-fed5ff0b13f0-kube-api-access-jl4hd\") pod \"glance-operator-controller-manager-8886f4c47-g5848\" (UID: \"1d859404-a29c-46c9-b66a-fed5ff0b13f0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.521487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tw28\" (UniqueName: \"kubernetes.io/projected/ec981da4-a3ba-4e4e-a0eb-2168ab79fe77-kube-api-access-5tw28\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8bg6c\" (UID: \"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.542920 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2xtcj" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.552470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vzd2\" (UniqueName: \"kubernetes.io/projected/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-kube-api-access-7vzd2\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568840 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j72j\" (UniqueName: \"kubernetes.io/projected/8d24cd33-2902-424a-8ffc-76b1e4c2f482-kube-api-access-9j72j\") pod \"heat-operator-controller-manager-69d6db494d-k4tz9\" (UID: \"8d24cd33-2902-424a-8ffc-76b1e4c2f482\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568952 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/710c57e4-a09e-4db1-a03b-13db05085d41-kube-api-access-4ptwr\") pod \"horizon-operator-controller-manager-5fb775575f-m4q78\" (UID: \"710c57e4-a09e-4db1-a03b-13db05085d41\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: E0130 14:02:52.573038 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:52 crc kubenswrapper[4793]: E0130 14:02:52.575478 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:53.075442347 +0000 UTC m=+1183.776790848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.577360 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.628553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.634627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.657312 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.676448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkv6\" (UniqueName: \"kubernetes.io/projected/7c34e714-0f18-4e41-ab9c-1dfe4859e644-kube-api-access-pdkv6\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v77jx\" (UID: \"7c34e714-0f18-4e41-ab9c-1dfe4859e644\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.677113 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679143 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j72j\" (UniqueName: \"kubernetes.io/projected/8d24cd33-2902-424a-8ffc-76b1e4c2f482-kube-api-access-9j72j\") pod \"heat-operator-controller-manager-69d6db494d-k4tz9\" (UID: \"8d24cd33-2902-424a-8ffc-76b1e4c2f482\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/710c57e4-a09e-4db1-a03b-13db05085d41-kube-api-access-4ptwr\") pod \"horizon-operator-controller-manager-5fb775575f-m4q78\" (UID: \"710c57e4-a09e-4db1-a03b-13db05085d41\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679946 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.690226 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.694509 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-b5dsj" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.697110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vzd2\" (UniqueName: \"kubernetes.io/projected/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-kube-api-access-7vzd2\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.700292 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.701247 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.709482 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.710403 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7kdf8" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.729581 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.730331 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.742353 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ql2x2" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.766501 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.782669 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.783359 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.791498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.802607 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7nrsc" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.804409 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctdf\" (UniqueName: \"kubernetes.io/projected/bdcd04f7-09fa-4b1b-8b99-3de61a28a337-kube-api-access-qctdf\") pod \"keystone-operator-controller-manager-84f48565d4-82cvq\" (UID: \"bdcd04f7-09fa-4b1b-8b99-3de61a28a337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/05415bc7-22dc-4b15-a047-6ed62755638d-kube-api-access-6kkgj\") pod \"neutron-operator-controller-manager-585dbc889-x6pk6\" (UID: \"05415bc7-22dc-4b15-a047-6ed62755638d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkv6\" (UniqueName: \"kubernetes.io/projected/7c34e714-0f18-4e41-ab9c-1dfe4859e644-kube-api-access-pdkv6\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v77jx\" (UID: \"7c34e714-0f18-4e41-ab9c-1dfe4859e644\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66zm\" (UniqueName: \"kubernetes.io/projected/ce9be14f-8255-421e-91b4-a30fc5482ff4-kube-api-access-n66zm\") pod \"manila-operator-controller-manager-7dd968899f-9ftxd\" (UID: \"ce9be14f-8255-421e-91b4-a30fc5482ff4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.847622 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.848469 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.851824 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qrbz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.868741 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.911577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkv6\" (UniqueName: \"kubernetes.io/projected/7c34e714-0f18-4e41-ab9c-1dfe4859e644-kube-api-access-pdkv6\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v77jx\" (UID: \"7c34e714-0f18-4e41-ab9c-1dfe4859e644\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.912263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntmr\" (UniqueName: \"kubernetes.io/projected/31ca6ac1-d2da-4325-baa4-e18fc3514721-kube-api-access-zntmr\") pod \"nova-operator-controller-manager-55bff696bd-vtx9d\" (UID: \"31ca6ac1-d2da-4325-baa4-e18fc3514721\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.927275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n66zm\" (UniqueName: \"kubernetes.io/projected/ce9be14f-8255-421e-91b4-a30fc5482ff4-kube-api-access-n66zm\") pod \"manila-operator-controller-manager-7dd968899f-9ftxd\" (UID: \"ce9be14f-8255-421e-91b4-a30fc5482ff4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.928038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctdf\" (UniqueName: \"kubernetes.io/projected/bdcd04f7-09fa-4b1b-8b99-3de61a28a337-kube-api-access-qctdf\") pod \"keystone-operator-controller-manager-84f48565d4-82cvq\" (UID: \"bdcd04f7-09fa-4b1b-8b99-3de61a28a337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.928227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/05415bc7-22dc-4b15-a047-6ed62755638d-kube-api-access-6kkgj\") pod \"neutron-operator-controller-manager-585dbc889-x6pk6\" (UID: \"05415bc7-22dc-4b15-a047-6ed62755638d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.928417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wj8\" (UniqueName: \"kubernetes.io/projected/fa88d14c-0581-439c-9da1-f1123e41a65a-kube-api-access-t7wj8\") pod \"mariadb-operator-controller-manager-67bf948998-n29l5\" (UID: \"fa88d14c-0581-439c-9da1-f1123e41a65a\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.916204 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.929671 4793 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.934787 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xdkjq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.934906 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.959859 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.960820 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.964435 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n66zm\" (UniqueName: \"kubernetes.io/projected/ce9be14f-8255-421e-91b4-a30fc5482ff4-kube-api-access-n66zm\") pod \"manila-operator-controller-manager-7dd968899f-9ftxd\" (UID: \"ce9be14f-8255-421e-91b4-a30fc5482ff4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.968861 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.974960 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.975987 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.979950 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.987140 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-szxt7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.987171 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.987321 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-spknc" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.991239 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctdf\" (UniqueName: \"kubernetes.io/projected/bdcd04f7-09fa-4b1b-8b99-3de61a28a337-kube-api-access-qctdf\") pod \"keystone-operator-controller-manager-84f48565d4-82cvq\" (UID: \"bdcd04f7-09fa-4b1b-8b99-3de61a28a337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.993289 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.994073 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/05415bc7-22dc-4b15-a047-6ed62755638d-kube-api-access-6kkgj\") pod \"neutron-operator-controller-manager-585dbc889-x6pk6\" (UID: \"05415bc7-22dc-4b15-a047-6ed62755638d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.994407 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:52.997800 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-v7v88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.019379 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.020225 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.022626 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.023617 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ssfbg" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wj8\" (UniqueName: \"kubernetes.io/projected/fa88d14c-0581-439c-9da1-f1123e41a65a-kube-api-access-t7wj8\") pod \"mariadb-operator-controller-manager-67bf948998-n29l5\" (UID: \"fa88d14c-0581-439c-9da1-f1123e41a65a\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcgl\" (UniqueName: \"kubernetes.io/projected/53576ec8-2f6d-4781-8906-726529cc6049-kube-api-access-ffcgl\") pod \"octavia-operator-controller-manager-6687f8d877-5nsr4\" (UID: \"53576ec8-2f6d-4781-8906-726529cc6049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029639 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntmr\" (UniqueName: \"kubernetes.io/projected/31ca6ac1-d2da-4325-baa4-e18fc3514721-kube-api-access-zntmr\") pod \"nova-operator-controller-manager-55bff696bd-vtx9d\" (UID: \"31ca6ac1-d2da-4325-baa4-e18fc3514721\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq5g\" (UniqueName: \"kubernetes.io/projected/6231ed92-57a8-4c48-9c75-e916940b22ea-kube-api-access-trq5g\") pod \"ovn-operator-controller-manager-788c46999f-4ml88\" (UID: \"6231ed92-57a8-4c48-9c75-e916940b22ea\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6f8\" (UniqueName: \"kubernetes.io/projected/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-kube-api-access-rn6f8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.039599 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"] Jan 30 14:02:53 
crc kubenswrapper[4793]: I0130 14:02:53.057574 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.060348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.070586 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntmr\" (UniqueName: \"kubernetes.io/projected/31ca6ac1-d2da-4325-baa4-e18fc3514721-kube-api-access-zntmr\") pod \"nova-operator-controller-manager-55bff696bd-vtx9d\" (UID: \"31ca6ac1-d2da-4325-baa4-e18fc3514721\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.071303 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wj8\" (UniqueName: \"kubernetes.io/projected/fa88d14c-0581-439c-9da1-f1123e41a65a-kube-api-access-t7wj8\") pod \"mariadb-operator-controller-manager-67bf948998-n29l5\" (UID: \"fa88d14c-0581-439c-9da1-f1123e41a65a\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.087435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.105868 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.117278 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.120979 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131806 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trq5g\" (UniqueName: \"kubernetes.io/projected/6231ed92-57a8-4c48-9c75-e916940b22ea-kube-api-access-trq5g\") pod \"ovn-operator-controller-manager-788c46999f-4ml88\" (UID: \"6231ed92-57a8-4c48-9c75-e916940b22ea\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131917 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6f8\" (UniqueName: \"kubernetes.io/projected/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-kube-api-access-rn6f8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffcgl\" (UniqueName: \"kubernetes.io/projected/53576ec8-2f6d-4781-8906-726529cc6049-kube-api-access-ffcgl\") pod \"octavia-operator-controller-manager-6687f8d877-5nsr4\" (UID: \"53576ec8-2f6d-4781-8906-726529cc6049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk66\" (UniqueName: \"kubernetes.io/projected/02b8e60c-3514-4d72-bde6-5af374a926b1-kube-api-access-jmk66\") pod \"placement-operator-controller-manager-5b964cf4cd-27flx\" (UID: \"02b8e60c-3514-4d72-bde6-5af374a926b1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132120 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kmh\" (UniqueName: \"kubernetes.io/projected/3eb94c51-d506-4273-898b-dba537cabea6-kube-api-access-b7kmh\") pod \"swift-operator-controller-manager-68fc8c869-vxhpt\" (UID: \"3eb94c51-d506-4273-898b-dba537cabea6\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132363 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-b45s5" Jan 30 14:02:53 crc 
kubenswrapper[4793]: I0130 14:02:53.132519 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"] Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.132658 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.132714 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.132691284 +0000 UTC m=+1184.834039775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.132997 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.133078 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:53.633038712 +0000 UTC m=+1184.334387293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.156411 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.157375 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.205319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.216463 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.226440 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.227837 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6f8\" (UniqueName: \"kubernetes.io/projected/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-kube-api-access-rn6f8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.229150 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9ldd5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.235841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmk66\" (UniqueName: \"kubernetes.io/projected/02b8e60c-3514-4d72-bde6-5af374a926b1-kube-api-access-jmk66\") pod \"placement-operator-controller-manager-5b964cf4cd-27flx\" (UID: \"02b8e60c-3514-4d72-bde6-5af374a926b1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.245172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kmh\" (UniqueName: \"kubernetes.io/projected/3eb94c51-d506-4273-898b-dba537cabea6-kube-api-access-b7kmh\") pod \"swift-operator-controller-manager-68fc8c869-vxhpt\" (UID: \"3eb94c51-d506-4273-898b-dba537cabea6\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.245360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvwb8\" (UniqueName: \"kubernetes.io/projected/5e215cef-de14-424d-9028-a48bad979192-kube-api-access-nvwb8\") pod \"test-operator-controller-manager-56f8bfcd9f-qb5xp\" (UID: \"5e215cef-de14-424d-9028-a48bad979192\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.245456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn565\" (UniqueName: \"kubernetes.io/projected/6b21b0ca-d506-4b1b-b6e1-06e2a96ae033-kube-api-access-qn565\") pod \"telemetry-operator-controller-manager-64b5b76f97-tv5vr\" (UID: \"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.246596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trq5g\" (UniqueName: \"kubernetes.io/projected/6231ed92-57a8-4c48-9c75-e916940b22ea-kube-api-access-trq5g\") pod \"ovn-operator-controller-manager-788c46999f-4ml88\" (UID: \"6231ed92-57a8-4c48-9c75-e916940b22ea\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.257710 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-btjpp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.262290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffcgl\" (UniqueName: 
\"kubernetes.io/projected/53576ec8-2f6d-4781-8906-726529cc6049-kube-api-access-ffcgl\") pod \"octavia-operator-controller-manager-6687f8d877-5nsr4\" (UID: \"53576ec8-2f6d-4781-8906-726529cc6049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.269425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmk66\" (UniqueName: \"kubernetes.io/projected/02b8e60c-3514-4d72-bde6-5af374a926b1-kube-api-access-jmk66\") pod \"placement-operator-controller-manager-5b964cf4cd-27flx\" (UID: \"02b8e60c-3514-4d72-bde6-5af374a926b1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.309869 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.311756 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.316293 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.320662 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.324929 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9qrnc" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.335439 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-btjpp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.346856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqqw\" (UniqueName: \"kubernetes.io/projected/f65e9448-ee4e-4f22-9bd7-ecf650cb36b5-kube-api-access-lpqqw\") pod \"watcher-operator-controller-manager-564965969-btjpp\" (UID: \"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.346919 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvwb8\" (UniqueName: \"kubernetes.io/projected/5e215cef-de14-424d-9028-a48bad979192-kube-api-access-nvwb8\") pod \"test-operator-controller-manager-56f8bfcd9f-qb5xp\" (UID: \"5e215cef-de14-424d-9028-a48bad979192\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.346943 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn565\" (UniqueName: \"kubernetes.io/projected/6b21b0ca-d506-4b1b-b6e1-06e2a96ae033-kube-api-access-qn565\") pod \"telemetry-operator-controller-manager-64b5b76f97-tv5vr\" (UID: \"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.349286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kmh\" (UniqueName: 
\"kubernetes.io/projected/3eb94c51-d506-4273-898b-dba537cabea6-kube-api-access-b7kmh\") pod \"swift-operator-controller-manager-68fc8c869-vxhpt\" (UID: \"3eb94c51-d506-4273-898b-dba537cabea6\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.368892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn565\" (UniqueName: \"kubernetes.io/projected/6b21b0ca-d506-4b1b-b6e1-06e2a96ae033-kube-api-access-qn565\") pod \"telemetry-operator-controller-manager-64b5b76f97-tv5vr\" (UID: \"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.377175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvwb8\" (UniqueName: \"kubernetes.io/projected/5e215cef-de14-424d-9028-a48bad979192-kube-api-access-nvwb8\") pod \"test-operator-controller-manager-56f8bfcd9f-qb5xp\" (UID: \"5e215cef-de14-424d-9028-a48bad979192\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.377953 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.388938 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.408950 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.433496 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.434625 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.438798 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.439007 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-95jx4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.439310 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.448394 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqqw\" (UniqueName: \"kubernetes.io/projected/f65e9448-ee4e-4f22-9bd7-ecf650cb36b5-kube-api-access-lpqqw\") pod \"watcher-operator-controller-manager-564965969-btjpp\" (UID: \"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.462366 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.491751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqqw\" (UniqueName: \"kubernetes.io/projected/f65e9448-ee4e-4f22-9bd7-ecf650cb36b5-kube-api-access-lpqqw\") pod \"watcher-operator-controller-manager-564965969-btjpp\" (UID: \"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.518525 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.519438 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.526093 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pzq5g" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.550943 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.550994 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxt7\" (UniqueName: \"kubernetes.io/projected/e9854850-e645-4364-a471-bef994f8536c-kube-api-access-6dxt7\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.551013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdwz\" (UniqueName: \"kubernetes.io/projected/2aae677d-830b-44b8-a792-3d0b527aee89-kube-api-access-5fdwz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb4g2\" (UID: \"2aae677d-830b-44b8-a792-3d0b527aee89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.551039 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.551174 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.583516 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxt7\" (UniqueName: \"kubernetes.io/projected/e9854850-e645-4364-a471-bef994f8536c-kube-api-access-6dxt7\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdwz\" (UniqueName: \"kubernetes.io/projected/2aae677d-830b-44b8-a792-3d0b527aee89-kube-api-access-5fdwz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb4g2\" (UID: \"2aae677d-830b-44b8-a792-3d0b527aee89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.657000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657155 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657200 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.157186298 +0000 UTC m=+1184.858534789 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657430 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657461 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.657445454 +0000 UTC m=+1185.358793945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657496 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657514 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.157507995 +0000 UTC m=+1184.858856486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.687694 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.689879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxt7\" (UniqueName: \"kubernetes.io/projected/e9854850-e645-4364-a471-bef994f8536c-kube-api-access-6dxt7\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.690466 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdwz\" (UniqueName: \"kubernetes.io/projected/2aae677d-830b-44b8-a792-3d0b527aee89-kube-api-access-5fdwz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb4g2\" (UID: \"2aae677d-830b-44b8-a792-3d0b527aee89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.786004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.791076 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.827697 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.911375 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.057194 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.169685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.169743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.169774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.169941 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:54 
crc kubenswrapper[4793]: E0130 14:02:54.169945 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.169968 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.169993 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:56.1699793 +0000 UTC m=+1186.871327781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.170025 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:55.170001091 +0000 UTC m=+1185.871349642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.170106 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:55.170038521 +0000 UTC m=+1185.871387112 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.436773 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.468600 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.559435 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d24cd33_2902_424a_8ffc_76b1e4c2f482.slice/crio-48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560 WatchSource:0}: Error finding container 48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560: Status 404 returned error can't find the container with id 48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.600571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.614170 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa88d14c_0581_439c_9da1_f1123e41a65a.slice/crio-0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4 WatchSource:0}: Error finding container 0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4: Status 404 returned error can't find the container with id 0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.632711 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.659548 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.662323 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53576ec8_2f6d_4781_8906_726529cc6049.slice/crio-d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af WatchSource:0}: Error finding container d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af: Status 404 returned error can't find the container with id d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.670594 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.684149 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05415bc7_22dc_4b15_a047_6ed62755638d.slice/crio-2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826 WatchSource:0}: Error finding container 2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826: Status 404 returned error can't find the 
container with id 2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.687594 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.688204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.688392 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.689130 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:56.68853153 +0000 UTC m=+1187.389880021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.690470 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6231ed92_57a8_4c48_9c75_e916940b22ea.slice/crio-b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673 WatchSource:0}: Error finding container b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673: Status 404 returned error can't find the container with id b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673 Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.694957 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aae677d_830b_44b8_a792_3d0b527aee89.slice/crio-d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816 WatchSource:0}: Error finding container d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816: Status 404 returned error can't find the container with id d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.701460 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.715357 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.721471 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trq5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-4ml88_openstack-operators(6231ed92-57a8-4c48-9c75-e916940b22ea): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.723407 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podUID="6231ed92-57a8-4c48-9c75-e916940b22ea" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.738761 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.745261 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nb4g2_openstack-operators(2aae677d-830b-44b8-a792-3d0b527aee89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.746565 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b21b0ca_d506_4b1b_b6e1_06e2a96ae033.slice/crio-9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c WatchSource:0}: Error finding container 9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c: Status 404 returned error can't find the container with id 9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.746615 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podUID="2aae677d-830b-44b8-a792-3d0b527aee89" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.758261 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.759010 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn565,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-tv5vr_openstack-operators(6b21b0ca-d506-4b1b-b6e1-06e2a96ae033): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.760096 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podUID="6b21b0ca-d506-4b1b-b6e1-06e2a96ae033" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.762715 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvwb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-qb5xp_openstack-operators(5e215cef-de14-424d-9028-a48bad979192): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.763780 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podUID="5e215cef-de14-424d-9028-a48bad979192" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.764815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" event={"ID":"53576ec8-2f6d-4781-8906-726529cc6049","Type":"ContainerStarted","Data":"d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.766715 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.768301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" event={"ID":"bdcd04f7-09fa-4b1b-8b99-3de61a28a337","Type":"ContainerStarted","Data":"1d38710c86fb5e192aeb14540956d24656db1d48954b833c32e36e4cb9ce5b0d"} Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.768537 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpqqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-btjpp_openstack-operators(f65e9448-ee4e-4f22-9bd7-ecf650cb36b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.769644 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" podUID="f65e9448-ee4e-4f22-9bd7-ecf650cb36b5" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.770792 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" event={"ID":"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77","Type":"ContainerStarted","Data":"f65d33231af656e5de4501b44ce1101798fdfa11173e1a209361899a47b40899"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.771598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" event={"ID":"8d24cd33-2902-424a-8ffc-76b1e4c2f482","Type":"ContainerStarted","Data":"48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.773194 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.777631 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.777853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" event={"ID":"fa88d14c-0581-439c-9da1-f1123e41a65a","Type":"ContainerStarted","Data":"0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.781875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" event={"ID":"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033","Type":"ContainerStarted","Data":"9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.787588 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-btjpp"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.789393 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podUID="6b21b0ca-d506-4b1b-b6e1-06e2a96ae033" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.790792 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" event={"ID":"6231ed92-57a8-4c48-9c75-e916940b22ea","Type":"ContainerStarted","Data":"b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673"} Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.795835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podUID="6231ed92-57a8-4c48-9c75-e916940b22ea" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.795932 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" event={"ID":"6f991e04-2db3-4b32-bc83-8bbce4ce7a08","Type":"ContainerStarted","Data":"084c7f30b9ee8d5a0ee3b2f434e8e027007bd69df096a48cdd3517c90f12da7b"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.802997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" event={"ID":"8835e5d9-c37d-4744-95cb-c56c10a58647","Type":"ContainerStarted","Data":"9b949a5b3cef31ec223df871bf3608c5eae084926f27907344d81bbc74673679"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.807928 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" event={"ID":"31ca6ac1-d2da-4325-baa4-e18fc3514721","Type":"ContainerStarted","Data":"be06a659c6d0e3fe4725ba323ec3085bbf746717b68d98d5bfe8acd5fa8709b8"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.809377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" 
event={"ID":"1d859404-a29c-46c9-b66a-fed5ff0b13f0","Type":"ContainerStarted","Data":"903b1f5dd62c9bd3678b966b8221e9010776913365c5395a18b2d8922f047686"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.810800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" event={"ID":"02b8e60c-3514-4d72-bde6-5af374a926b1","Type":"ContainerStarted","Data":"54c0133d98303667573b43bf7596ca633f8ec91b36f920d837da801afa6f8e99"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.814001 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" event={"ID":"05415bc7-22dc-4b15-a047-6ed62755638d","Type":"ContainerStarted","Data":"2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.816203 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" event={"ID":"2aae677d-830b-44b8-a792-3d0b527aee89","Type":"ContainerStarted","Data":"d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.816705 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.818665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podUID="2aae677d-830b-44b8-a792-3d0b527aee89" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.819625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" event={"ID":"710c57e4-a09e-4db1-a03b-13db05085d41","Type":"ContainerStarted","Data":"4a75ad9e81d987662f6d439402d58e63420f7818d17550b22161b78009b8c1c6"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.823461 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" event={"ID":"7c34e714-0f18-4e41-ab9c-1dfe4859e644","Type":"ContainerStarted","Data":"c9b7879953162331770b5c3c1b2734204ffdaae76e50a6aba51675f4d73acdd4"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.825472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" event={"ID":"ce9be14f-8255-421e-91b4-a30fc5482ff4","Type":"ContainerStarted","Data":"5264eb0e11dbbccbf6732042af23f0fc227036f40a41883b2873b0ef8a50b4ce"} Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.830313 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7kmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-vxhpt_openstack-operators(3eb94c51-d506-4273-898b-dba537cabea6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.831551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podUID="3eb94c51-d506-4273-898b-dba537cabea6" Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.196701 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.197122 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197251 4793 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197295 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:57.197282927 +0000 UTC m=+1187.898631408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197337 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197368 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:57.197350449 +0000 UTC m=+1187.898698940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.838852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" event={"ID":"5e215cef-de14-424d-9028-a48bad979192","Type":"ContainerStarted","Data":"27ffa0b55c7fffa9a10f3884e6cd74d7d8dd8a29eb7e3983dc9aa667aa6653d5"} Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.841983 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podUID="5e215cef-de14-424d-9028-a48bad979192" Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.844846 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" event={"ID":"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5","Type":"ContainerStarted","Data":"4d52683ce83ecfbefd34cf10d049265d36a877ab8e75a0c32263780971962732"} Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.847106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" event={"ID":"3eb94c51-d506-4273-898b-dba537cabea6","Type":"ContainerStarted","Data":"48cc938651f3825f9d91039109cb9b855313e932e22358a8ed5ec945990d8ce6"} Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.847388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" 
podUID="f65e9448-ee4e-4f22-9bd7-ecf650cb36b5" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.864402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podUID="2aae677d-830b-44b8-a792-3d0b527aee89" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.864462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podUID="6231ed92-57a8-4c48-9c75-e916940b22ea" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.868069 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podUID="3eb94c51-d506-4273-898b-dba537cabea6" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.868103 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podUID="6b21b0ca-d506-4b1b-b6e1-06e2a96ae033" Jan 30 14:02:56 crc kubenswrapper[4793]: I0130 14:02:56.222025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.222190 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.222283 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:00.222264618 +0000 UTC m=+1190.923613169 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: I0130 14:02:56.730590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.730795 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.731099 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:00.731075415 +0000 UTC m=+1191.432423906 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.886632 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podUID="3eb94c51-d506-4273-898b-dba537cabea6" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.886772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podUID="5e215cef-de14-424d-9028-a48bad979192" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.887029 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" podUID="f65e9448-ee4e-4f22-9bd7-ecf650cb36b5" Jan 30 14:02:57 crc kubenswrapper[4793]: I0130 14:02:57.238565 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:57 crc kubenswrapper[4793]: 
I0130 14:02:57.238629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238765 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238806 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238855 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:01.238833587 +0000 UTC m=+1191.940182068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238874 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:01.238868128 +0000 UTC m=+1191.940216619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: I0130 14:03:00.315820 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.315997 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.316264 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:08.316243179 +0000 UTC m=+1199.017591680 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: I0130 14:03:00.822387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.822593 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.822730 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:08.82269789 +0000 UTC m=+1199.524046422 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: I0130 14:03:01.328987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:01 crc kubenswrapper[4793]: I0130 14:03:01.329161 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329170 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329358 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:09.329325215 +0000 UTC m=+1200.030673746 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329215 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329482 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:09.329451948 +0000 UTC m=+1200.030800439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.098998 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.099748 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdkv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-v77jx_openstack-operators(7c34e714-0f18-4e41-ab9c-1dfe4859e644): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.100924 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" podUID="7c34e714-0f18-4e41-ab9c-1dfe4859e644" Jan 30 14:03:08 crc kubenswrapper[4793]: I0130 14:03:08.335175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.335379 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.335499 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:24.335479761 +0000 UTC m=+1215.036828302 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.648907 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.649466 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8bkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-9kwwr_openstack-operators(8835e5d9-c37d-4744-95cb-c56c10a58647): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.651368 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" 
podUID="8835e5d9-c37d-4744-95cb-c56c10a58647" Jan 30 14:03:08 crc kubenswrapper[4793]: I0130 14:03:08.846146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.846340 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.846413 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:24.846391498 +0000 UTC m=+1215.547739979 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.951931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" podUID="8835e5d9-c37d-4744-95cb-c56c10a58647" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.951836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" podUID="7c34e714-0f18-4e41-ab9c-1dfe4859e644" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.243374 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.243662 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kkgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-x6pk6_openstack-operators(05415bc7-22dc-4b15-a047-6ed62755638d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.244869 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" podUID="05415bc7-22dc-4b15-a047-6ed62755638d" Jan 30 14:03:09 crc kubenswrapper[4793]: I0130 14:03:09.352922 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:09 crc kubenswrapper[4793]: I0130 14:03:09.352997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.353360 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.353419 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. 
No retries permitted until 2026-01-30 14:03:25.353403393 +0000 UTC m=+1216.054751884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:03:09 crc kubenswrapper[4793]: I0130 14:03:09.371280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.803179 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.803452 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmk66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b964cf4cd-27flx_openstack-operators(02b8e60c-3514-4d72-bde6-5af374a926b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.805730 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" podUID="02b8e60c-3514-4d72-bde6-5af374a926b1" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.957264 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" podUID="05415bc7-22dc-4b15-a047-6ed62755638d" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.957671 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" podUID="02b8e60c-3514-4d72-bde6-5af374a926b1" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.798483 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.799178 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wmpv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-hjpkr_openstack-operators(6f991e04-2db3-4b32-bc83-8bbce4ce7a08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.801580 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" podUID="6f991e04-2db3-4b32-bc83-8bbce4ce7a08" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.969767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" podUID="6f991e04-2db3-4b32-bc83-8bbce4ce7a08" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.413541 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.413606 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.413646 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.414336 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.414396 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" 
containerName="machine-config-daemon" containerID="cri-o://2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28" gracePeriod=600 Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.976137 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28" exitCode=0 Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.976192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28"} Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.976230 4793 scope.go:117] "RemoveContainer" containerID="a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.296132 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.296776 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7wj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-n29l5_openstack-operators(fa88d14c-0581-439c-9da1-f1123e41a65a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.298116 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" podUID="fa88d14c-0581-439c-9da1-f1123e41a65a" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.863798 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.863973 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jl4hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-g5848_openstack-operators(1d859404-a29c-46c9-b66a-fed5ff0b13f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.865931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" podUID="1d859404-a29c-46c9-b66a-fed5ff0b13f0" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.995919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" podUID="fa88d14c-0581-439c-9da1-f1123e41a65a" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.997524 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" podUID="1d859404-a29c-46c9-b66a-fed5ff0b13f0" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.341618 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.342105 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n66zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-9ftxd_openstack-operators(ce9be14f-8255-421e-91b4-a30fc5482ff4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.343836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" podUID="ce9be14f-8255-421e-91b4-a30fc5482ff4" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.877971 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.878225 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qctdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-82cvq_openstack-operators(bdcd04f7-09fa-4b1b-8b99-3de61a28a337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.879961 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" podUID="bdcd04f7-09fa-4b1b-8b99-3de61a28a337" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.007348 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" podUID="ce9be14f-8255-421e-91b4-a30fc5482ff4" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.007540 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" podUID="bdcd04f7-09fa-4b1b-8b99-3de61a28a337" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.405803 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.405968 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zntmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-vtx9d_openstack-operators(31ca6ac1-d2da-4325-baa4-e18fc3514721): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.407173 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" podUID="31ca6ac1-d2da-4325-baa4-e18fc3514721" Jan 30 14:03:19 crc kubenswrapper[4793]: E0130 14:03:19.013370 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" podUID="31ca6ac1-d2da-4325-baa4-e18fc3514721" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.419967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod 
\"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.426425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.541257 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ct9pn" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.550537 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.926501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.943199 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.142610 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-spknc" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.151531 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.433442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.437379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.682025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-95jx4" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.690717 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:28 crc kubenswrapper[4793]: I0130 14:03:28.422881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"] Jan 30 14:03:28 crc kubenswrapper[4793]: I0130 14:03:28.542590 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"] Jan 30 14:03:28 crc kubenswrapper[4793]: W0130 14:03:28.578280 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9854850_e645_4364_a471_bef994f8536c.slice/crio-b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc WatchSource:0}: Error finding container b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc: Status 404 returned error can't find the container with id b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc Jan 30 14:03:28 crc kubenswrapper[4793]: I0130 14:03:28.670400 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"] Jan 30 14:03:28 crc kubenswrapper[4793]: W0130 14:03:28.766849 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode446e97c_6e9f_4dc2_b5fd_fb63451fd326.slice/crio-bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f WatchSource:0}: Error finding container bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f: Status 404 returned error can't find the container with id bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.112089 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" event={"ID":"710c57e4-a09e-4db1-a03b-13db05085d41","Type":"ContainerStarted","Data":"6d08b8f8d51f12a15ce91448e8d9f2a4814c5e254c97b37b448a077769d1a560"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.112220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.114169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" event={"ID":"6f991e04-2db3-4b32-bc83-8bbce4ce7a08","Type":"ContainerStarted","Data":"5d58d9cb51b15256753293ae92c1997066479a155769973e25ce2cebf51cc9d1"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.114305 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.118078 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" event={"ID":"e446e97c-6e9f-4dc2-b5fd-fb63451fd326","Type":"ContainerStarted","Data":"bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.123693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" 
event={"ID":"2aae677d-830b-44b8-a792-3d0b527aee89","Type":"ContainerStarted","Data":"293dcbb0b62f2f73a14860453e3edc835f536be4bed5bf16cff006627cc9c8b3"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.125409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" event={"ID":"7c34e714-0f18-4e41-ab9c-1dfe4859e644","Type":"ContainerStarted","Data":"4b67acb08e34346b47114952d4c9d43251b624fa74d9feed65156034c775e72f"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.125976 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.129614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" event={"ID":"6231ed92-57a8-4c48-9c75-e916940b22ea","Type":"ContainerStarted","Data":"1cd04c4391e1aa64f2f8d19c195ecc4ea1893b517242f6600a0448557e5b3aef"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.129961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.142331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" event={"ID":"53576ec8-2f6d-4781-8906-726529cc6049","Type":"ContainerStarted","Data":"4498d2a99f62f450fb0ee6f1eeb7e64c106e8ce8c79acd314b0b7fe2c691718f"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.142419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.147597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" event={"ID":"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033","Type":"ContainerStarted","Data":"a1e488365e9baeba1abff0c1b1ae3300c6079d75f704730ad6b738a785a519bc"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.147973 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.154082 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" event={"ID":"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77","Type":"ContainerStarted","Data":"3f032202705eb4d294a11cd1aaa16cacaf5ea769d8ca352c5ded6dbdd7b47465"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.154175 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.165619 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" podStartSLOduration=12.097361437 podStartE2EDuration="37.165601076s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:53.865164979 +0000 UTC m=+1184.566513470" lastFinishedPulling="2026-01-30 14:03:18.933404598 +0000 UTC m=+1209.634753109" observedRunningTime="2026-01-30 14:03:29.157327149 +0000 UTC m=+1219.858675650" 
watchObservedRunningTime="2026-01-30 14:03:29.165601076 +0000 UTC m=+1219.866949567" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.165650 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" event={"ID":"8d24cd33-2902-424a-8ffc-76b1e4c2f482","Type":"ContainerStarted","Data":"7a4840128de67007bc3089340f7bda4d74cb43411b5799584659144d01f54d2d"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.166521 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.184042 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.198903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" event={"ID":"02b8e60c-3514-4d72-bde6-5af374a926b1","Type":"ContainerStarted","Data":"322e117348d537a97afb0fe3e60f32a7b2ddc9b3913e2e54e9a4fcb830fd8e87"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.203370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.207481 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podStartSLOduration=2.940212422 podStartE2EDuration="36.207463599s" podCreationTimestamp="2026-01-30 14:02:53 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.719916503 +0000 UTC m=+1185.421264984" lastFinishedPulling="2026-01-30 14:03:27.98716767 +0000 UTC m=+1218.688516161" observedRunningTime="2026-01-30 14:03:29.190524363 +0000 UTC m=+1219.891872874" watchObservedRunningTime="2026-01-30 14:03:29.207463599 +0000 UTC m=+1219.908812090" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.227345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" event={"ID":"5e215cef-de14-424d-9028-a48bad979192","Type":"ContainerStarted","Data":"20398ace5d623f7d4eb3a8e0b37021d7885d43d4210688d410a6a7ae44ebd035"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.227982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.228901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" event={"ID":"e9854850-e645-4364-a471-bef994f8536c","Type":"ContainerStarted","Data":"b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.234345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" event={"ID":"3eb94c51-d506-4273-898b-dba537cabea6","Type":"ContainerStarted","Data":"f1bfeef5977d2bf323ff2e676f330bfe179b896f579fdc70d159507c0d75fa2c"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.235036 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.247089 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" podStartSLOduration=3.863579387 podStartE2EDuration="37.247073008s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.58535947 +0000 UTC m=+1185.286707961" lastFinishedPulling="2026-01-30 14:03:27.968853091 +0000 UTC m=+1218.670201582" observedRunningTime="2026-01-30 14:03:29.243518313 +0000 UTC m=+1219.944866794" watchObservedRunningTime="2026-01-30 14:03:29.247073008 +0000 UTC m=+1219.948421489" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.250362 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" event={"ID":"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642","Type":"ContainerStarted","Data":"5b9a9eec655e99bf4a5a92b43436e7d40ce0b2fd269fc5e49ce02f9134364010"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.280699 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" podStartSLOduration=3.912459948 podStartE2EDuration="37.280675313s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.588188528 +0000 UTC m=+1185.289537019" lastFinishedPulling="2026-01-30 14:03:27.956403893 +0000 UTC m=+1218.657752384" observedRunningTime="2026-01-30 14:03:29.267592199 +0000 UTC m=+1219.968940690" watchObservedRunningTime="2026-01-30 14:03:29.280675313 +0000 UTC m=+1219.982023804" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.318882 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" podStartSLOduration=13.053739985 podStartE2EDuration="37.318866028s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.669990547 +0000 UTC m=+1185.371339038" lastFinishedPulling="2026-01-30 14:03:18.93511659 +0000 UTC m=+1209.636465081" observedRunningTime="2026-01-30 14:03:29.314398491 +0000 UTC m=+1220.015746982" watchObservedRunningTime="2026-01-30 14:03:29.318866028 +0000 UTC m=+1220.020214519" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.347408 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" podStartSLOduration=12.294179871 podStartE2EDuration="37.34738931s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:53.878778435 +0000 UTC m=+1184.580126926" lastFinishedPulling="2026-01-30 14:03:18.931987864 +0000 UTC m=+1209.633336365" observedRunningTime="2026-01-30 14:03:29.346571661 +0000 UTC m=+1220.047920162" watchObservedRunningTime="2026-01-30 14:03:29.34738931 +0000 UTC m=+1220.048737801" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.370821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podStartSLOduration=4.266422646 podStartE2EDuration="37.370799601s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.758898007 +0000 UTC m=+1185.460246498" lastFinishedPulling="2026-01-30 
14:03:27.863274962 +0000 UTC m=+1218.564623453" observedRunningTime="2026-01-30 14:03:29.368799834 +0000 UTC m=+1220.070148335" watchObservedRunningTime="2026-01-30 14:03:29.370799601 +0000 UTC m=+1220.072148092"
Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.396977 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podStartSLOduration=4.235441384 podStartE2EDuration="37.396958058s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.720005075 +0000 UTC m=+1185.421353566" lastFinishedPulling="2026-01-30 14:03:27.881521749 +0000 UTC m=+1218.582870240" observedRunningTime="2026-01-30 14:03:29.391285442 +0000 UTC m=+1220.092633943" watchObservedRunningTime="2026-01-30 14:03:29.396958058 +0000 UTC m=+1220.098306549"
Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.427014 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podStartSLOduration=4.384095064 podStartE2EDuration="37.426999187s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.830210465 +0000 UTC m=+1185.531558956" lastFinishedPulling="2026-01-30 14:03:27.873114548 +0000 UTC m=+1218.574463079" observedRunningTime="2026-01-30 14:03:29.425605504 +0000 UTC m=+1220.126953995" watchObservedRunningTime="2026-01-30 14:03:29.426999187 +0000 UTC m=+1220.128347678"
Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.469450 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podStartSLOduration=4.379077993 podStartE2EDuration="37.469434624s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.762630915 +0000 UTC m=+1185.463979406" lastFinishedPulling="2026-01-30 14:03:27.852987546 +0000 UTC m=+1218.554336037" observedRunningTime="2026-01-30 14:03:29.467293503 +0000 UTC m=+1220.168641994" watchObservedRunningTime="2026-01-30 14:03:29.469434624 +0000 UTC m=+1220.170783115"
Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.557896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" podStartSLOduration=13.207355674 podStartE2EDuration="37.557879143s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.584950949 +0000 UTC m=+1185.286299440" lastFinishedPulling="2026-01-30 14:03:18.935474398 +0000 UTC m=+1209.636822909" observedRunningTime="2026-01-30 14:03:29.537223838 +0000 UTC m=+1220.238572339" watchObservedRunningTime="2026-01-30 14:03:29.557879143 +0000 UTC m=+1220.259227634"
Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.656271 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" podStartSLOduration=4.393487299 podStartE2EDuration="37.656256119s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.694007602 +0000 UTC m=+1185.395356093" lastFinishedPulling="2026-01-30 14:03:27.956776412 +0000 UTC m=+1218.658124913" observedRunningTime="2026-01-30 14:03:29.621715171 +0000 UTC m=+1220.323063662" watchObservedRunningTime="2026-01-30 14:03:29.656256119 +0000 UTC m=+1220.357604610"
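The pod_startup_latency_tracker entries above are internally consistent: in each one, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals the E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), with the subtraction done on the monotonic m=+... readings rather than the wall-clock ones. (Entries with zeroed pull timestamps, like openstack-operator-controller-manager below, report SLO == E2E accordingly.) A small check against the ovn-operator entry quoted from above; the parsing helpers are illustrative, not kubelet code:

#!/usr/bin/env python3
"""Verify the arithmetic inside one pod_startup_latency_tracker entry."""
import re

SAMPLE = (
    'podStartSLOduration=4.235441384 podStartE2EDuration="37.396958058s" '
    'podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" '
    'firstStartedPulling="2026-01-30 14:02:54.720005075 +0000 UTC m=+1185.421353566" '
    'lastFinishedPulling="2026-01-30 14:03:27.881521749 +0000 UTC m=+1218.582870240" '
    'watchObservedRunningTime="2026-01-30 14:03:29.396958058 +0000 UTC m=+1220.098306549"'
)

def mono(field, line):
    """Monotonic (m=+...) reading attached to a quoted timestamp field."""
    return float(re.search(field + r'="[^"]*m=\+([0-9.]+)"', line).group(1))

slo = float(re.search(r"podStartSLOduration=([0-9.]+)", SAMPLE).group(1))
e2e = float(re.search(r'podStartE2EDuration="([0-9.]+)s"', SAMPLE).group(1))
pull_window = mono("lastFinishedPulling", SAMPLE) - mono("firstStartedPulling", SAMPLE)

print(f"pull window: {pull_window:.9f}s")
print(f"e2e - pull:  {e2e - pull_window:.9f}s (reported SLO: {slo:.9f}s)")
assert abs((e2e - pull_window) - slo) < 1e-6

Run as-is it prints a pull window of 33.161516674s and recovers the reported SLO duration of 4.235441384s exactly.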
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.256980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" event={"ID":"1d859404-a29c-46c9-b66a-fed5ff0b13f0","Type":"ContainerStarted","Data":"ac3e067efe5c5da02b8fb97811c39920d5020a10f369eeb121a52a4572239128"}
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.257963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.259129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" event={"ID":"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5","Type":"ContainerStarted","Data":"b0d4b39b0f9cecd59cb0720b242941b5b172ab8b965299f045f58c98b9fe743e"}
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.260569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" event={"ID":"e9854850-e645-4364-a471-bef994f8536c","Type":"ContainerStarted","Data":"daa484d5ca82becb56802bcf64a76f541e659963aef9603cb9dac6d4d9db7698"}
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.294319 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" podStartSLOduration=4.217579076 podStartE2EDuration="38.294301612s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.079826051 +0000 UTC m=+1184.781174542" lastFinishedPulling="2026-01-30 14:03:28.156548587 +0000 UTC m=+1218.857897078" observedRunningTime="2026-01-30 14:03:30.292477948 +0000 UTC m=+1220.993826439" watchObservedRunningTime="2026-01-30 14:03:30.294301612 +0000 UTC m=+1220.995650103"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.273134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" event={"ID":"05415bc7-22dc-4b15-a047-6ed62755638d","Type":"ContainerStarted","Data":"92185437c26d53f7e6a0c77384511c8172fbbe61eb0097a8737beb22aac455a0"}
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.273503 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.275299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" event={"ID":"8835e5d9-c37d-4744-95cb-c56c10a58647","Type":"ContainerStarted","Data":"ce279e3fb363f3026da51a6ba412e86078297a583fc64a03f049c53e8f30d9e2"}
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.275325 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.275972 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.276390 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.290600 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" podStartSLOduration=6.027461367 podStartE2EDuration="39.290580945s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.706242675 +0000 UTC m=+1185.407591166" lastFinishedPulling="2026-01-30 14:03:27.969362253 +0000 UTC m=+1218.670710744" observedRunningTime="2026-01-30 14:03:31.289396147 +0000 UTC m=+1221.990744638" watchObservedRunningTime="2026-01-30 14:03:31.290580945 +0000 UTC m=+1221.991929436" Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.342268 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" podStartSLOduration=38.342242243 podStartE2EDuration="38.342242243s" podCreationTimestamp="2026-01-30 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:03:31.335964363 +0000 UTC m=+1222.037312864" watchObservedRunningTime="2026-01-30 14:03:31.342242243 +0000 UTC m=+1222.043590734" Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.365847 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" podStartSLOduration=6.166286772 podStartE2EDuration="39.365828358s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.768392333 +0000 UTC m=+1185.469740824" lastFinishedPulling="2026-01-30 14:03:27.967933919 +0000 UTC m=+1218.669282410" observedRunningTime="2026-01-30 14:03:31.361641717 +0000 UTC m=+1222.062990198" watchObservedRunningTime="2026-01-30 14:03:31.365828358 +0000 UTC m=+1222.067176839" Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.381911 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" podStartSLOduration=5.292979943 podStartE2EDuration="39.381895852s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:53.879091732 +0000 UTC m=+1184.580440223" lastFinishedPulling="2026-01-30 14:03:27.968007641 +0000 UTC m=+1218.669356132" observedRunningTime="2026-01-30 14:03:31.379812423 +0000 UTC m=+1222.081160914" watchObservedRunningTime="2026-01-30 14:03:31.381895852 +0000 UTC m=+1222.083244343" Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.283113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" event={"ID":"fa88d14c-0581-439c-9da1-f1123e41a65a","Type":"ContainerStarted","Data":"c5d8ae934a12d94beb722222e25e7718bca78238f52645c316e13a698f5d4cdb"} Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.283345 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.285547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" event={"ID":"31ca6ac1-d2da-4325-baa4-e18fc3514721","Type":"ContainerStarted","Data":"131a8866ed381dbfacdfaee2b04e7ec69858d1bcb03c1fcf1fcd221966f702f5"} Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.285731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.287716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" event={"ID":"ce9be14f-8255-421e-91b4-a30fc5482ff4","Type":"ContainerStarted","Data":"7fa904291b57f7502f2f4c58d66ceb0ac545053075d9a12e008340630dd1df71"} Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.288135 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.316278 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" podStartSLOduration=3.9605311690000002 podStartE2EDuration="40.316262543s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.664023334 +0000 UTC m=+1185.365371825" lastFinishedPulling="2026-01-30 14:03:31.019754708 +0000 UTC m=+1221.721103199" observedRunningTime="2026-01-30 14:03:32.300539226 +0000 UTC m=+1223.001887717" watchObservedRunningTime="2026-01-30 14:03:32.316262543 +0000 UTC m=+1223.017611034" Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.319189 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" podStartSLOduration=3.994090952 podStartE2EDuration="40.319173433s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.684948955 +0000 UTC m=+1185.386297446" lastFinishedPulling="2026-01-30 14:03:31.010031436 +0000 UTC m=+1221.711379927" observedRunningTime="2026-01-30 14:03:32.313486207 +0000 UTC m=+1223.014834708" watchObservedRunningTime="2026-01-30 14:03:32.319173433 +0000 UTC m=+1223.020521924" Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.419682 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" podStartSLOduration=3.80443487 podStartE2EDuration="40.41966551s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.585605986 +0000 UTC m=+1185.286954477" lastFinishedPulling="2026-01-30 14:03:31.200836626 +0000 UTC m=+1221.902185117" observedRunningTime="2026-01-30 14:03:32.336440306 +0000 UTC m=+1223.037788817" watchObservedRunningTime="2026-01-30 14:03:32.41966551 +0000 UTC m=+1223.121014001" Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.312811 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.383644 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.414003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.586816 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.301566 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" event={"ID":"bdcd04f7-09fa-4b1b-8b99-3de61a28a337","Type":"ContainerStarted","Data":"ae8edf990d7d598da8c49027fd0c9141b51d72ee143023d81d0e02cb56137363"} Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.302149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.303194 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" event={"ID":"e446e97c-6e9f-4dc2-b5fd-fb63451fd326","Type":"ContainerStarted","Data":"4ff604a61e6addc102a2634f52536b0ff351a12eebda7c87d40b6e2cfbb568d5"} Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.303311 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.306679 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" event={"ID":"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642","Type":"ContainerStarted","Data":"dc003551a07fbe409c0c00ed6b2229783c5b4a02ab68f9eb38c1157364077279"} Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.306912 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.321634 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" podStartSLOduration=3.008003643 podStartE2EDuration="42.321617216s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.674222188 +0000 UTC m=+1185.375570679" lastFinishedPulling="2026-01-30 14:03:33.987835761 +0000 UTC m=+1224.689184252" observedRunningTime="2026-01-30 14:03:34.316686237 +0000 UTC m=+1225.018034738" watchObservedRunningTime="2026-01-30 14:03:34.321617216 +0000 UTC m=+1225.022965707" Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.350766 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" podStartSLOduration=37.142436782 podStartE2EDuration="42.350751714s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:03:28.778167377 +0000 UTC m=+1219.479515868" lastFinishedPulling="2026-01-30 14:03:33.986482289 +0000 UTC m=+1224.687830800" observedRunningTime="2026-01-30 14:03:34.345789035 +0000 UTC m=+1225.047137526" watchObservedRunningTime="2026-01-30 14:03:34.350751714 +0000 UTC m=+1225.052100205" Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.373186 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" podStartSLOduration=37.001625258 podStartE2EDuration="42.373170941s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:03:28.615070729 +0000 UTC m=+1219.316419220" lastFinishedPulling="2026-01-30 14:03:33.986616392 +0000 UTC m=+1224.687964903" observedRunningTime="2026-01-30 14:03:34.367542146 +0000 UTC m=+1225.068890637" 
watchObservedRunningTime="2026-01-30 14:03:34.373170941 +0000 UTC m=+1225.074519432" Jan 30 14:03:35 crc kubenswrapper[4793]: I0130 14:03:35.697248 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.558569 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.581719 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.632674 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.639754 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.712492 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.972926 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.993568 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.063258 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.110621 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.231272 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.231547 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.233444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.334695 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.394968 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.695642 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:03:44 crc 
kubenswrapper[4793]: I0130 14:03:44.556797 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:45 crc kubenswrapper[4793]: I0130 14:03:45.159430 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.432910 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.434391 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.439942 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nksbk" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.440221 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.440374 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.440481 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.444143 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.508587 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.509781 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.513587 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514315 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514403 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.523758 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.615946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616097 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.637353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.640888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.756580 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.823420 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.097308 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.102631 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.189021 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:02 crc kubenswrapper[4793]: W0130 14:04:02.191486 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6047db8_60b6_4b1d_94d0_9934475fb39e.slice/crio-0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28 WatchSource:0}: Error finding container 0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28: Status 404 returned error can't find the container with id 0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28 Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.518828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" event={"ID":"a6047db8-60b6-4b1d-94d0-9934475fb39e","Type":"ContainerStarted","Data":"0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28"} Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.521411 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" event={"ID":"ea64ca1b-5302-40cc-9918-810b75c36240","Type":"ContainerStarted","Data":"ee3c031683159179731efba2dde35050df6b60a59cdc2e43e0c06f26ed4f9d1f"} Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.180039 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.223041 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.224228 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.241038 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.379422 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.379484 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.379506 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.480543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.480606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.480631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.481986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.482504 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.552419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6fw\" (UniqueName: 
\"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.606984 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.630788 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.631944 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.659472 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.786210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.786289 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.786364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.841612 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.893495 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.893793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.893841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.894927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.895032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.925169 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.964634 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.348320 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.443241 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.444348 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.449918 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.455954 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.457099 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.457246 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.463397 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.471963 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.472945 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.472950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4mm4r" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.571377 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576016 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576369 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576455 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576524 
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576610 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576887 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576981 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.611728 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerStarted","Data":"b6d25f5f6c7c96e5312511cdf0154bdf3db1eff34982a8bfa221c443bb69496c"}
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701168 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701249 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701285 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701386 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701467 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.702304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.702820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.703657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.716257 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.716598 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.730635 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.731307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.741648 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.779465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.807956 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.812910 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.820936 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.822276 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.828916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.828933 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.829619 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.829797 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.830412 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.830648 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dkqxx"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.830758 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.833697 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.896912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916850 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916877 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916895 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916929 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.917014 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018518 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018561 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018611 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018655 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.019932 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.022361 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.030750 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.032158 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.038320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.050780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.050892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.052394 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.052910 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.063369 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.052029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.076677 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.088248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.214313 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.650332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerStarted","Data":"a95902e824bd19a3e1746ccd97d0b63e3b3629d4c2754b4eeaeedb289cd0a81a"}
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.915963 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.981489 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.032233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.033351 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.039697 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-lmpfw"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.039925 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.040134 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.040550 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.042225 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.065073 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139192 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139272 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9gw4\" (UniqueName: \"kubernetes.io/projected/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kube-api-access-p9gw4\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246323 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246477 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246520 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9gw4\" (UniqueName: \"kubernetes.io/projected/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kube-api-access-p9gw4\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246645 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246671 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246712 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.248465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.248712 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.248931 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.249131 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.250763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.272529 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9gw4\" (UniqueName: \"kubernetes.io/projected/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kube-api-access-p9gw4\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.272913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.283621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.290757 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.466584 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.682786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerStarted","Data":"0efe8f891a233c8e5ac4fe6bb1b425a66ddbc8f34f8412134d77a42240eb7c39"}
Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.700395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerStarted","Data":"49420acdae0565905cd8f73dba3384bd4f0c8ed41985335ead11f16b3b125159"}
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.040895 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.100291 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.141145 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.147857 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hb24d"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.147918 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.148069 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.149184 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.150721 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274945 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274971 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.275026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.275065 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brxc\" (UniqueName: \"kubernetes.io/projected/41e0025f-6abc-4554-b7a0-c132607aec86-kube-api-access-6brxc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.275102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6brxc\" (UniqueName: \"kubernetes.io/projected/41e0025f-6abc-4554-b7a0-c132607aec86-kube-api-access-6brxc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377950 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.379145 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.379332 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.379761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.382527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.398456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.411966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.414374 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6brxc\" (UniqueName: \"kubernetes.io/projected/41e0025f-6abc-4554-b7a0-c132607aec86-kube-api-access-6brxc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.443598 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.528665 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.627192 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.628107 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.631314 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.631528 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kn5v2"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.631661 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.653893 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684275 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-config-data\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8t4\" (UniqueName: \"kubernetes.io/projected/89e99d15-97ad-4ac5-ba68-82ef88460222-kube-api-access-qp8t4\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684426 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-kolla-config\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788316 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp8t4\" (UniqueName: \"kubernetes.io/projected/89e99d15-97ad-4ac5-ba68-82ef88460222-kube-api-access-qp8t4\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788369 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788398 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-kolla-config\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788459 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-config-data\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.791832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-kolla-config\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.801696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-config-data\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.806098 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerStarted","Data":"06e458f281786a13b324b174ac35ae3b7301d1d2d20e5f80ac0fd053e95b543a"}
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.812715 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp8t4\" (UniqueName: \"kubernetes.io/projected/89e99d15-97ad-4ac5-ba68-82ef88460222-kube-api-access-qp8t4\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.812907 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.813530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0"
Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.958859 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 30 14:04:09 crc kubenswrapper[4793]: I0130 14:04:09.488246 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 30 14:04:09 crc kubenswrapper[4793]: I0130 14:04:09.832143 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 30 14:04:09 crc kubenswrapper[4793]: I0130 14:04:09.872410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerStarted","Data":"a2416f0e9999abe6cf0b1693538e57bb731071a12bc060d17ec264849e142bf1"}
Jan 30 14:04:10 crc kubenswrapper[4793]: I0130 14:04:10.919718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"89e99d15-97ad-4ac5-ba68-82ef88460222","Type":"ContainerStarted","Data":"5dcc56db407340685fbbe2c142bb6566727831beca81ef596fce19fbee41c708"}
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.036610 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.037766 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.042721 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dz4v6"
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.082579 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.185343 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"kube-state-metrics-0\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.289454 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"kube-state-metrics-0\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.319852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"kube-state-metrics-0\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.381386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.993994 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:04:13 crc kubenswrapper[4793]: I0130 14:04:13.000698 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerStarted","Data":"71bf22217d9be03e116230139d0442df663407d89a0d201f8b40fe58cd8686cf"}
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.003039 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-45fd5"]
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.004378 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-45fd5"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.012669 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4kssx"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.012767 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.018935 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5"]
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.027867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.065314 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.082510 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.082617 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096129 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096265 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9s4dn"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096320 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096463 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096610 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-combined-ca-bundle\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5"
Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5"
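The ovn-controller-45fd5 entries above introduce host-path volumes (var-run, var-run-ovn, var-log-ovn) alongside the secret, configmap, projected, and local-volume kinds seen so far; the UniqueName encodes the plugin as a path prefix. A tiny Go classifier over names taken from this log (the pluginOf helper is invented for illustration):

package main

import (
	"fmt"
	"strings"
)

// pluginOf returns the volume plugin segment of a UniqueName as logged above.
func pluginOf(uniqueName string) string {
	trimmed := strings.TrimPrefix(uniqueName, "kubernetes.io/")
	return strings.SplitN(trimmed, "/", 2)[0] // e.g. "host-path", "secret"
}

func main() {
	for _, n := range []string{
		"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn",
		"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs",
		"kubernetes.io/local-volume/local-storage03-crc",
	} {
		fmt.Printf("%s -> %s\n", n, pluginOf(n))
	}
}
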
\"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230700ff-5087-4d0d-9d93-90b597d2ef72-scripts\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8kh\" (UniqueName: \"kubernetes.io/projected/230700ff-5087-4d0d-9d93-90b597d2ef72-kube-api-access-qm8kh\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176807 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.203184 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-56x4d"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.205774 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-combined-ca-bundle\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278575 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ls5x\" (UniqueName: \"kubernetes.io/projected/bfa8998b-ee3a-4aea-80e8-c59620a5308a-kube-api-access-7ls5x\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230700ff-5087-4d0d-9d93-90b597d2ef72-scripts\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278690 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm8kh\" (UniqueName: \"kubernetes.io/projected/230700ff-5087-4d0d-9d93-90b597d2ef72-kube-api-access-qm8kh\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278744 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.279408 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.281486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230700ff-5087-4d0d-9d93-90b597d2ef72-scripts\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.281491 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.349097 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.349879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.350603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-combined-ca-bundle\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.350648 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-56x4d"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.369671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm8kh\" (UniqueName: \"kubernetes.io/projected/230700ff-5087-4d0d-9d93-90b597d2ef72-kube-api-access-qm8kh\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381195 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381212 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ls5x\" (UniqueName: \"kubernetes.io/projected/bfa8998b-ee3a-4aea-80e8-c59620a5308a-kube-api-access-7ls5x\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381247 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381269 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd56\" (UniqueName: \"kubernetes.io/projected/f6d71a04-6d3d-4444-9963-950135c3d6da-kube-api-access-ntd56\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381348 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d71a04-6d3d-4444-9963-950135c3d6da-scripts\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381372 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-lib\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-run\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-log\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: 
I0130 14:04:14.381442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-etc-ovs\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.382019 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.382426 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.382903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.383993 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.424766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.428932 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.429508 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.440147 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ls5x\" (UniqueName: \"kubernetes.io/projected/bfa8998b-ee3a-4aea-80e8-c59620a5308a-kube-api-access-7ls5x\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.467289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483356 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-etc-ovs\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd56\" (UniqueName: \"kubernetes.io/projected/f6d71a04-6d3d-4444-9963-950135c3d6da-kube-api-access-ntd56\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d71a04-6d3d-4444-9963-950135c3d6da-scripts\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-lib\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483591 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-run\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-log\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-log\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.485094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-etc-ovs\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.485251 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-lib\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.485318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-run\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.487925 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.499986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d71a04-6d3d-4444-9963-950135c3d6da-scripts\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.522793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd56\" (UniqueName: \"kubernetes.io/projected/f6d71a04-6d3d-4444-9963-950135c3d6da-kube-api-access-ntd56\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.552448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.648568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-45fd5" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.841957 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.851767 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.854850 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.855028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9qtfg" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.855198 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.855736 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.863273 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982194 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/285be7d6-1f03-43af-8087-46ba257183ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982321 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45t5\" (UniqueName: \"kubernetes.io/projected/285be7d6-1f03-43af-8087-46ba257183ec-kube-api-access-v45t5\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083718 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/285be7d6-1f03-43af-8087-46ba257183ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083893 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45t5\" (UniqueName: \"kubernetes.io/projected/285be7d6-1f03-43af-8087-46ba257183ec-kube-api-access-v45t5\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083945 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.084259 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.086285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/285be7d6-1f03-43af-8087-46ba257183ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.086327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.087934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.099007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.099375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.100215 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.105149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45t5\" (UniqueName: \"kubernetes.io/projected/285be7d6-1f03-43af-8087-46ba257183ec-kube-api-access-v45t5\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.105896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.195458 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.752533 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.753286 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rck4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0ab4371b-53c0-41a1-9561-0c02f936c7a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.754438 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.767115 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.767716 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f59v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(5a4cd276-23a5-4acb-bb1b-41470a11c945): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.768965 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" Jan 30 14:04:30 crc kubenswrapper[4793]: E0130 14:04:30.186377 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" Jan 30 14:04:30 crc kubenswrapper[4793]: E0130 14:04:30.187319 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" Jan 30 14:04:32 crc kubenswrapper[4793]: E0130 14:04:32.882463 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 14:04:32 crc kubenswrapper[4793]: E0130 14:04:32.882656 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6brxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(41e0025f-6abc-4554-b7a0-c132607aec86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:32 crc kubenswrapper[4793]: E0130 14:04:32.884192 4793 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="41e0025f-6abc-4554-b7a0-c132607aec86" Jan 30 14:04:33 crc kubenswrapper[4793]: E0130 14:04:33.203192 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="41e0025f-6abc-4554-b7a0-c132607aec86" Jan 30 14:04:34 crc kubenswrapper[4793]: E0130 14:04:34.601888 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 30 14:04:34 crc kubenswrapper[4793]: E0130 14:04:34.602310 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nb9h78h5d4h96h679h56ch556hcbhdh6dh68fh585h577h68dhc5h5h5dch5dch84h545h664h5ffhcbh596h58bh5f5h8dh67dh5hbdh84h577q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(89e99d15-97ad-4ac5-ba68-82ef88460222): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:34 crc kubenswrapper[4793]: E0130 14:04:34.604028 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="89e99d15-97ad-4ac5-ba68-82ef88460222" Jan 30 14:04:35 crc kubenswrapper[4793]: E0130 14:04:35.226424 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="89e99d15-97ad-4ac5-ba68-82ef88460222" Jan 30 14:04:37 crc kubenswrapper[4793]: E0130 14:04:37.551255 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 14:04:37 crc kubenswrapper[4793]: E0130 14:04:37.551992 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9gw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(f45b0069-4cb7-4dfd-ac2d-1473cacbde1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:37 crc kubenswrapper[4793]: E0130 14:04:37.553441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" Jan 30 14:04:37 crc kubenswrapper[4793]: I0130 14:04:37.800517 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5"] Jan 30 14:04:38 crc kubenswrapper[4793]: E0130 14:04:38.252349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" Jan 30 14:04:42 crc kubenswrapper[4793]: W0130 14:04:42.476870 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod230700ff_5087_4d0d_9d93_90b597d2ef72.slice/crio-0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7 WatchSource:0}: Error finding container 0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7: Status 404 returned error can't find the container with id 0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7 Jan 30 14:04:42 
crc kubenswrapper[4793]: I0130 14:04:42.985800 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:04:43 crc kubenswrapper[4793]: I0130 14:04:43.289219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5" event={"ID":"230700ff-5087-4d0d-9d93-90b597d2ef72","Type":"ContainerStarted","Data":"0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7"} Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.583715 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.584974 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw6fw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-6twpw_openstack(57f8cfde-399c-43ec-bf72-e96f12a05ae2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.586776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.609244 4793 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.609388 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-qtp9b_openstack(ea64ca1b-5302-40cc-9918-810b75c36240): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.610669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" podUID="ea64ca1b-5302-40cc-9918-810b75c36240" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.634116 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.634276 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xvlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tngjn_openstack(a6047db8-60b6-4b1d-94d0-9934475fb39e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.635469 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" podUID="a6047db8-60b6-4b1d-94d0-9934475fb39e" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.675829 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.676030 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lhk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-vfvss_openstack(4ebaeca8-f301-4d75-8691-98415ddcf7e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.677298 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" Jan 30 14:04:43 crc kubenswrapper[4793]: I0130 14:04:43.758781 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.069452 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-56x4d"] Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.298427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfa8998b-ee3a-4aea-80e8-c59620a5308a","Type":"ContainerStarted","Data":"a578141da421138078dc94afb22e8ec18c67185d426c8e546c675b69f313a882"} Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.300203 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"285be7d6-1f03-43af-8087-46ba257183ec","Type":"ContainerStarted","Data":"a6c864ea805244cc9f917b3520d929aaa74ad2b7a49a41c11a44442dc5a601c0"} Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.301945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" 
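Note: all four dnsmasq-dns init-container pulls above fail the same way, a cancelled copy of quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified, and within a second the pods flip from ErrImagePull to ImagePullBackOff in the entries that follow. The kubelet spaces pull retries with exponential backoff; the sketch below assumes its usual defaults (10s base, doubling, capped at 300s), which are not visible in the log itself.

package main

import (
	"fmt"
	"time"
)

// Retry schedule for a failed image pull under exponential backoff.
// The 10s base / 2x factor / 300s cap are assumed kubelet defaults,
// shown only to illustrate the ErrImagePull -> ImagePullBackOff
// cadence seen in the surrounding entries.
func main() {
	const maxDelay = 300 * time.Second
	delay := 10 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: back off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}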
Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.304654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" Jan 30 14:04:44 crc kubenswrapper[4793]: W0130 14:04:44.631934 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d71a04_6d3d_4444_9963_950135c3d6da.slice/crio-5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6 WatchSource:0}: Error finding container 5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6: Status 404 returned error can't find the container with id 5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6 Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.696936 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.697103 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.697303 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g555f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(e61af9bc-c79d-4e81-a602-37afbdc017a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.699392 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.881571 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.936568 4793 util.go:48] "No ready sandbox for pod can be found. 
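The kube-state-metrics container spec dumped above carries two HTTP probes: liveness GET /livez on 8080 and readiness GET /readyz on 8081, each with a 5s initial delay, 5s timeout, 10s period, and a failure threshold of 3. A sketch of the same probes rebuilt with the upstream Go API types (the helper name is mine):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpProbe reproduces the probe settings shown in the container dump:
// HTTPGet on the given path/port, 5s delay and timeout, 10s period,
// 1 success / 3 failures.
func httpProbe(path string, port int) *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   path,
				Port:   intstr.FromInt(port),
				Scheme: corev1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 5,
		TimeoutSeconds:      5,
		PeriodSeconds:       10,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
}

func main() {
	fmt.Println(httpProbe("/livez", 8080))  // liveness
	fmt.Println(httpProbe("/readyz", 8081)) // readiness
}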
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037514 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"ea64ca1b-5302-40cc-9918-810b75c36240\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"ea64ca1b-5302-40cc-9918-810b75c36240\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"a6047db8-60b6-4b1d-94d0-9934475fb39e\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037664 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"a6047db8-60b6-4b1d-94d0-9934475fb39e\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"ea64ca1b-5302-40cc-9918-810b75c36240\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.038164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config" (OuterVolumeSpecName: "config") pod "ea64ca1b-5302-40cc-9918-810b75c36240" (UID: "ea64ca1b-5302-40cc-9918-810b75c36240"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.038315 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea64ca1b-5302-40cc-9918-810b75c36240" (UID: "ea64ca1b-5302-40cc-9918-810b75c36240"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.039081 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config" (OuterVolumeSpecName: "config") pod "a6047db8-60b6-4b1d-94d0-9934475fb39e" (UID: "a6047db8-60b6-4b1d-94d0-9934475fb39e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.042489 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt" (OuterVolumeSpecName: "kube-api-access-8xvlt") pod "a6047db8-60b6-4b1d-94d0-9934475fb39e" (UID: "a6047db8-60b6-4b1d-94d0-9934475fb39e"). InnerVolumeSpecName "kube-api-access-8xvlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.043061 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb" (OuterVolumeSpecName: "kube-api-access-278cb") pod "ea64ca1b-5302-40cc-9918-810b75c36240" (UID: "ea64ca1b-5302-40cc-9918-810b75c36240"). InnerVolumeSpecName "kube-api-access-278cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139526 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139557 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139568 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139580 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139593 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.309388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" event={"ID":"ea64ca1b-5302-40cc-9918-810b75c36240","Type":"ContainerDied","Data":"ee3c031683159179731efba2dde35050df6b60a59cdc2e43e0c06f26ed4f9d1f"} Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.309466 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.313147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerStarted","Data":"5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6"} Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.315802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" event={"ID":"a6047db8-60b6-4b1d-94d0-9934475fb39e","Type":"ContainerDied","Data":"0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28"} Jan 30 14:04:45 crc kubenswrapper[4793]: E0130 14:04:45.316543 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.316693 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.406338 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.417764 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.439251 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.447561 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.321739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerStarted","Data":"d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991"} Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.326168 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerStarted","Data":"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48"} Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.410287 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6047db8-60b6-4b1d-94d0-9934475fb39e" path="/var/lib/kubelet/pods/a6047db8-60b6-4b1d-94d0-9934475fb39e/volumes" Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.410759 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea64ca1b-5302-40cc-9918-810b75c36240" path="/var/lib/kubelet/pods/ea64ca1b-5302-40cc-9918-810b75c36240/volumes" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.408410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"285be7d6-1f03-43af-8087-46ba257183ec","Type":"ContainerStarted","Data":"5f977086a20135b5c73312cd73f299f0c72f0872684a6d3b87673481e31d8f46"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409208 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-45fd5" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerStarted","Data":"a5f690625509d9f182522efae60dbd8b14b995b3093c366d0783ec9f47faf44f"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5" event={"ID":"230700ff-5087-4d0d-9d93-90b597d2ef72","Type":"ContainerStarted","Data":"5b237d565754ec86efd0a672aecff5cd47e2a2edf65044217fa18c12e2cddad3"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"89e99d15-97ad-4ac5-ba68-82ef88460222","Type":"ContainerStarted","Data":"6c7459b57017b64fa7fafbd9f1661b0078e148ac66792474ac9fc92f81b472a4"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409986 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.410265 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"bfa8998b-ee3a-4aea-80e8-c59620a5308a","Type":"ContainerStarted","Data":"95a3843fa64746a2ae326f96cf6556335e8a8fc9fe27e573d8ff111ced9b3403"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.412722 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerDied","Data":"98df26c156510140f51b0afd7722ffaa1126f3e1b6a146ea7bd95ff308fac46b"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.413161 4793 generic.go:334] "Generic (PLEG): container finished" podID="f6d71a04-6d3d-4444-9963-950135c3d6da" containerID="98df26c156510140f51b0afd7722ffaa1126f3e1b6a146ea7bd95ff308fac46b" exitCode=0 Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.415097 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerStarted","Data":"133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.430624 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-45fd5" podStartSLOduration=30.604106195 podStartE2EDuration="41.430603681s" podCreationTimestamp="2026-01-30 14:04:13 +0000 UTC" firstStartedPulling="2026-01-30 14:04:42.478785147 +0000 UTC m=+1293.180133638" lastFinishedPulling="2026-01-30 14:04:53.305282633 +0000 UTC m=+1304.006631124" observedRunningTime="2026-01-30 14:04:54.42159866 +0000 UTC m=+1305.122947161" watchObservedRunningTime="2026-01-30 14:04:54.430603681 +0000 UTC m=+1305.131952172" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.519174 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.010303573 podStartE2EDuration="46.519154742s" podCreationTimestamp="2026-01-30 14:04:08 +0000 UTC" firstStartedPulling="2026-01-30 14:04:09.911986677 +0000 UTC m=+1260.613335168" lastFinishedPulling="2026-01-30 14:04:53.420837806 +0000 UTC m=+1304.122186337" observedRunningTime="2026-01-30 14:04:54.514537979 +0000 UTC m=+1305.215886480" watchObservedRunningTime="2026-01-30 14:04:54.519154742 +0000 UTC m=+1305.220503233" Jan 30 14:04:55 crc kubenswrapper[4793]: I0130 14:04:55.426012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerStarted","Data":"aa17ab4cf043ac7bf510f1a779d7a49c0b8bc619c395d3dfa5231c885d485193"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.433352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfa8998b-ee3a-4aea-80e8-c59620a5308a","Type":"ContainerStarted","Data":"7373e9ef498cd121e57fc24eb191a80970b3c3bae2c9482b6bca66cad3fa8fdd"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.435833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerStarted","Data":"bb31e04ec262f0558eb898cc652abac461a20ac4bc486d22c80fbbc39c3c7bdd"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.435999 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.436220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:56 crc 
kubenswrapper[4793]: I0130 14:04:56.438325 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"285be7d6-1f03-43af-8087-46ba257183ec","Type":"ContainerStarted","Data":"92d9f11da992a79894aa252d4fbcd2a2ad7caedd58a70a7c719fcca59c378de2"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.440101 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerID="3f70174b11e96cdd2d573d9ee24e4219762e2a0529f8d646d037440b2831590b" exitCode=0 Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.440163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerDied","Data":"3f70174b11e96cdd2d573d9ee24e4219762e2a0529f8d646d037440b2831590b"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.441711 4793 generic.go:334] "Generic (PLEG): container finished" podID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerID="e55e6db12bc091de69952e0e4d9fe2c04ddaa0a5ca5e5c173912be87073539b1" exitCode=0 Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.441822 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerDied","Data":"e55e6db12bc091de69952e0e4d9fe2c04ddaa0a5ca5e5c173912be87073539b1"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.485720 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=32.705700186 podStartE2EDuration="44.485699064s" podCreationTimestamp="2026-01-30 14:04:12 +0000 UTC" firstStartedPulling="2026-01-30 14:04:43.5478538 +0000 UTC m=+1294.249202331" lastFinishedPulling="2026-01-30 14:04:55.327852718 +0000 UTC m=+1306.029201209" observedRunningTime="2026-01-30 14:04:56.460415224 +0000 UTC m=+1307.161763725" watchObservedRunningTime="2026-01-30 14:04:56.485699064 +0000 UTC m=+1307.187047565" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.488957 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.557550 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-56x4d" podStartSLOduration=33.955135829 podStartE2EDuration="42.557485834s" podCreationTimestamp="2026-01-30 14:04:14 +0000 UTC" firstStartedPulling="2026-01-30 14:04:44.700462877 +0000 UTC m=+1295.401811368" lastFinishedPulling="2026-01-30 14:04:53.302812882 +0000 UTC m=+1304.004161373" observedRunningTime="2026-01-30 14:04:56.543111352 +0000 UTC m=+1307.244459883" watchObservedRunningTime="2026-01-30 14:04:56.557485834 +0000 UTC m=+1307.258834325" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.557835 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.576781 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=29.068562942 podStartE2EDuration="40.576760046s" podCreationTimestamp="2026-01-30 14:04:16 +0000 UTC" firstStartedPulling="2026-01-30 14:04:43.80357431 +0000 UTC m=+1294.504922801" lastFinishedPulling="2026-01-30 14:04:55.311771414 +0000 UTC m=+1306.013119905" observedRunningTime="2026-01-30 14:04:56.568106634 +0000 UTC m=+1307.269455155" 
watchObservedRunningTime="2026-01-30 14:04:56.576760046 +0000 UTC m=+1307.278108537" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.195914 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.246292 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: E0130 14:04:57.305384 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45b0069_4cb7_4dfd_ac2d_1473cacbde1f.slice/crio-133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.452315 4793 generic.go:334] "Generic (PLEG): container finished" podID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerID="133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75" exitCode=0 Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.452356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerDied","Data":"133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.455486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerStarted","Data":"b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.456657 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.459113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerStarted","Data":"239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.465882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.468494 4793 generic.go:334] "Generic (PLEG): container finished" podID="41e0025f-6abc-4554-b7a0-c132607aec86" containerID="a5f690625509d9f182522efae60dbd8b14b995b3093c366d0783ec9f47faf44f" exitCode=0 Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.470093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerDied","Data":"a5f690625509d9f182522efae60dbd8b14b995b3093c366d0783ec9f47faf44f"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.493928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.493961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.520989 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podStartSLOduration=3.263026386 podStartE2EDuration="53.520926433s" 
podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:05.615033264 +0000 UTC m=+1256.316381755" lastFinishedPulling="2026-01-30 14:04:55.872933311 +0000 UTC m=+1306.574281802" observedRunningTime="2026-01-30 14:04:57.512612739 +0000 UTC m=+1308.213961270" watchObservedRunningTime="2026-01-30 14:04:57.520926433 +0000 UTC m=+1308.222274944" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.570395 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podStartSLOduration=3.1576393 podStartE2EDuration="53.570375075s" podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:05.40361895 +0000 UTC m=+1256.104967441" lastFinishedPulling="2026-01-30 14:04:55.816354715 +0000 UTC m=+1306.517703216" observedRunningTime="2026-01-30 14:04:57.555398589 +0000 UTC m=+1308.256747080" watchObservedRunningTime="2026-01-30 14:04:57.570375075 +0000 UTC m=+1308.271723566" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.255240 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.478688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerStarted","Data":"d056557fce99c07acb071a67afa2e1446c3feab1b82855ca8a754b04b8e74676"} Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.487297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerStarted","Data":"dddf25c087963445e2a1fc98cd0aa5ea8ba0709bb8e76a65ef0bcde18ddca387"} Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.508760 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.422228817 podStartE2EDuration="53.508022923s" podCreationTimestamp="2026-01-30 14:04:05 +0000 UTC" firstStartedPulling="2026-01-30 14:04:08.220607304 +0000 UTC m=+1258.921955795" lastFinishedPulling="2026-01-30 14:04:53.30640141 +0000 UTC m=+1304.007749901" observedRunningTime="2026-01-30 14:04:58.502891727 +0000 UTC m=+1309.204240218" watchObservedRunningTime="2026-01-30 14:04:58.508022923 +0000 UTC m=+1309.209371414" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.530298 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.530362 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.551055 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.706068017 podStartE2EDuration="51.551025437s" podCreationTimestamp="2026-01-30 14:04:07 +0000 UTC" firstStartedPulling="2026-01-30 14:04:09.577814373 +0000 UTC m=+1260.279162864" lastFinishedPulling="2026-01-30 14:04:53.422771793 +0000 UTC m=+1304.124120284" observedRunningTime="2026-01-30 14:04:58.542420597 +0000 UTC m=+1309.243769088" watchObservedRunningTime="2026-01-30 14:04:58.551025437 +0000 UTC m=+1309.252373928" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.571560 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 
14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.617146 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.618535 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.624248 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.657304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.688642 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vx7z5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.689817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.694314 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.726092 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vx7z5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.787846 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-config\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.787899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-combined-ca-bundle\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.787988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovn-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788073 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788112 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovs-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8lf\" (UniqueName: \"kubernetes.io/projected/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-kube-api-access-rt8lf\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788155 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788173 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788187 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.884309 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-config\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-combined-ca-bundle\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovn-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " 
pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovs-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891448 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8lf\" (UniqueName: \"kubernetes.io/projected/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-kube-api-access-rt8lf\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891512 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.892247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-config\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.893033 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.906800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-combined-ca-bundle\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.907427 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovn-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.907747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovs-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.908492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.909167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.931782 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.935596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8lf\" (UniqueName: \"kubernetes.io/projected/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-kube-api-access-rt8lf\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.945166 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.962199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.035571 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.145692 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.187076 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.191500 4793 util.go:30] "No sandbox for pod can be found. 
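Each volume of the two new pods is logged three times as it comes up, operationExecutor.VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, always keyed by the same UniqueName of the form <plugin>/<pod-uid>-<volume>. A small helper for pulling those strings apart (the function name and the fixed 36-character UID width are my assumptions, though the width holds for every UniqueName in this log):

package main

import (
	"fmt"
	"strings"
)

// splitUniqueName breaks a reconciler UniqueName such as
// "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config"
// into plugin, pod UID and volume name. Pod UIDs are 36-character
// UUIDs, so the volume name is whatever follows the UID and its dash.
func splitUniqueName(u string) (plugin, podUID, volume string) {
	i := strings.LastIndex(u, "/")
	plugin, rest := u[:i], u[i+1:]
	return plugin, rest[:36], rest[37:]
}

func main() {
	fmt.Println(splitUniqueName("kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config"))
	// kubernetes.io/configmap 085da052-4aff-4c31-a5ac-398194b443a2 config
}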
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.195704 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.221294 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.245623 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311785 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311837 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.312077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.413663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414118 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414183 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.415101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.415686 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.415724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.440202 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.506953 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" containerID="cri-o://239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f" gracePeriod=10 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.511158 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" 
containerID="cri-o://b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689" gracePeriod=10 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.584328 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.603385 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.605152 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.607169 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.607708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.608160 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g7cb6" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.608449 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.613564 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.800829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vx7z5"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829253 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829306 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829370 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-config\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: W0130 14:04:59.829375 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eaf3033_e5f4_48bc_bdee_b7d97e57e765.slice/crio-bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8 WatchSource:0}: Error finding container bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8: Status 404 returned error can't 
find the container with id bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-scripts\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829453 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtdm\" (UniqueName: \"kubernetes.io/projected/270527bd-015e-4904-8916-07993e081611-kube-api-access-qmtdm\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/270527bd-015e-4904-8916-07993e081611-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/270527bd-015e-4904-8916-07993e081611-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941794 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941883 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-config\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941945 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-scripts\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941964 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmtdm\" (UniqueName: 
\"kubernetes.io/projected/270527bd-015e-4904-8916-07993e081611-kube-api-access-qmtdm\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.942708 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/270527bd-015e-4904-8916-07993e081611-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.947577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.947629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-scripts\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.948007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-config\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.951339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.960884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.966996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmtdm\" (UniqueName: \"kubernetes.io/projected/270527bd-015e-4904-8916-07993e081611-kube-api-access-qmtdm\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: W0130 14:04:59.973691 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod085da052_4aff_4c31_a5ac_398194b443a2.slice/crio-8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48 WatchSource:0}: Error finding container 8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48: Status 404 returned error can't find the container with id 8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.978819 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.162232 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:00 crc kubenswrapper[4793]: W0130 
14:05:00.172247 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6997fc47_52ce_4421_b8bc_14ad27f1d522.slice/crio-47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f WatchSource:0}: Error finding container 47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f: Status 404 returned error can't find the container with id 47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.239962 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.513966 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerStarted","Data":"47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.516235 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerStarted","Data":"7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.517125 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.519577 4793 generic.go:334] "Generic (PLEG): container finished" podID="085da052-4aff-4c31-a5ac-398194b443a2" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" exitCode=0 Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.519684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerDied","Data":"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.519715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerStarted","Data":"8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.533357 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerID="b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689" exitCode=0 Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.533454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerDied","Data":"b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.534915 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.639576914 podStartE2EDuration="49.534904732s" podCreationTimestamp="2026-01-30 14:04:11 +0000 UTC" firstStartedPulling="2026-01-30 14:04:12.034901976 +0000 UTC m=+1262.736250467" lastFinishedPulling="2026-01-30 14:04:58.930229794 +0000 UTC m=+1309.631578285" observedRunningTime="2026-01-30 14:05:00.533428126 +0000 UTC m=+1311.234776617" watchObservedRunningTime="2026-01-30 14:05:00.534904732 +0000 UTC m=+1311.236253223" Jan 30 14:05:00 crc 
kubenswrapper[4793]: I0130 14:05:00.539526 4793 generic.go:334] "Generic (PLEG): container finished" podID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerID="239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f" exitCode=0 Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.539595 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerDied","Data":"239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.554859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vx7z5" event={"ID":"2eaf3033-e5f4-48bc-bdee-b7d97e57e765","Type":"ContainerStarted","Data":"f410276e211f4a96a871fa2d9e8b4c4d50ce43f15034df0d2b438a9f073dbdf6"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.554898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vx7z5" event={"ID":"2eaf3033-e5f4-48bc-bdee-b7d97e57e765","Type":"ContainerStarted","Data":"bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.583968 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vx7z5" podStartSLOduration=2.5839494739999997 podStartE2EDuration="2.583949474s" podCreationTimestamp="2026-01-30 14:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:00.58375464 +0000 UTC m=+1311.285103131" watchObservedRunningTime="2026-01-30 14:05:00.583949474 +0000 UTC m=+1311.285297965" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.628077 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.651967 4793 util.go:48] "No ready sandbox for pod can be found. 
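
The kube-state-metrics-0 latency line above encodes a useful identity: podStartSLOduration is the end-to-end startup time minus the image-pull window, so 49.534904732s - (14:04:58.930229794 - 14:04:12.034901976) = 2.639576914s. The m=+1311... suffixes are monotonic-clock offsets since kubelet start. A short check of that arithmetic, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-30 14:04:11 +0000 UTC")         // podCreationTimestamp
        running := parse("2026-01-30 14:05:00.534904732 +0000 UTC") // observedRunningTime
        firstPull := parse("2026-01-30 14:04:12.034901976 +0000 UTC")
        lastPull := parse("2026-01-30 14:04:58.930229794 +0000 UTC")

        e2e := running.Sub(created)        // podStartE2EDuration: 49.534904732s
        pulling := lastPull.Sub(firstPull) // time spent pulling the image
        fmt.Println("SLO duration:", e2e-pulling) // 2.639576914s, matching the log
    }
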
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767347 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767558 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.776285 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6" (OuterVolumeSpecName: "kube-api-access-7lhk6") pod "4ebaeca8-f301-4d75-8691-98415ddcf7e2" (UID: "4ebaeca8-f301-4d75-8691-98415ddcf7e2"). InnerVolumeSpecName "kube-api-access-7lhk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.783011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw" (OuterVolumeSpecName: "kube-api-access-mw6fw") pod "57f8cfde-399c-43ec-bf72-e96f12a05ae2" (UID: "57f8cfde-399c-43ec-bf72-e96f12a05ae2"). InnerVolumeSpecName "kube-api-access-mw6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.824978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ebaeca8-f301-4d75-8691-98415ddcf7e2" (UID: "4ebaeca8-f301-4d75-8691-98415ddcf7e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.837699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config" (OuterVolumeSpecName: "config") pod "57f8cfde-399c-43ec-bf72-e96f12a05ae2" (UID: "57f8cfde-399c-43ec-bf72-e96f12a05ae2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.838755 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57f8cfde-399c-43ec-bf72-e96f12a05ae2" (UID: "57f8cfde-399c-43ec-bf72-e96f12a05ae2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.838770 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config" (OuterVolumeSpecName: "config") pod "4ebaeca8-f301-4d75-8691-98415ddcf7e2" (UID: "4ebaeca8-f301-4d75-8691-98415ddcf7e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869709 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869748 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869758 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869766 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869773 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869783 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.937461 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:05:00 crc kubenswrapper[4793]: W0130 14:05:00.955665 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod270527bd_015e_4904_8916_07993e081611.slice/crio-e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28 WatchSource:0}: Error finding container e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28: Status 404 returned error can't find the container with id 
e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28 Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.405571 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454599 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454894 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454907 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454923 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454954 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454960 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454971 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454977 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.455131 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.455149 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.455925 4793 util.go:30] "No sandbox for pod can be found. 
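
The RemoveStaleState lines above are cleanup, not failures, despite the E severity: the two deleted dnsmasq pods left per-container CPU and memory assignments behind, and the resource managers drop them when the replacement pod is admitted. A sketch of that sweep over a simple map keyed the way the log is, by (podUID, containerName); the real managers operate on checkpointed state:

    package main

    import "fmt"

    func main() {
        type key struct{ podUID, container string }
        stale := map[key]string{
            {"4ebaeca8-f301-4d75-8691-98415ddcf7e2", "dnsmasq-dns"}: "cpuset",
            {"4ebaeca8-f301-4d75-8691-98415ddcf7e2", "init"}:        "cpuset",
            {"57f8cfde-399c-43ec-bf72-e96f12a05ae2", "dnsmasq-dns"}: "cpuset",
            {"57f8cfde-399c-43ec-bf72-e96f12a05ae2", "init"}:        "cpuset",
        }
        live := map[string]bool{} // neither pod UID is active any longer
        for k := range stale {
            if !live[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
                delete(stale, k)
            }
        }
    }
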
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.498571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512226 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512295 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.571987 4793 generic.go:334] "Generic (PLEG): container finished" podID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerID="dc354132d0a6cd02111dfdce273ff0e36cd8eedf4408a97ce6c6cb48e38782b8" exitCode=0 Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.572077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerDied","Data":"dc354132d0a6cd02111dfdce273ff0e36cd8eedf4408a97ce6c6cb48e38782b8"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.605727 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerStarted","Data":"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.606918 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tsm\" (UniqueName: 
\"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613776 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613804 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.614738 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.616225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.616721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.617288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.624032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerDied","Data":"a95902e824bd19a3e1746ccd97d0b63e3b3629d4c2754b4eeaeedb289cd0a81a"} 
Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.624114 4793 scope.go:117] "RemoveContainer" containerID="b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.624240 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.641471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerDied","Data":"b6d25f5f6c7c96e5312511cdf0154bdf3db1eff34982a8bfa221c443bb69496c"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.641619 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.641699 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" podStartSLOduration=3.641681555 podStartE2EDuration="3.641681555s" podCreationTimestamp="2026-01-30 14:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:01.635873353 +0000 UTC m=+1312.337221844" watchObservedRunningTime="2026-01-30 14:05:01.641681555 +0000 UTC m=+1312.343030046" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.648811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"270527bd-015e-4904-8916-07993e081611","Type":"ContainerStarted","Data":"e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.658613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.738416 4793 scope.go:117] "RemoveContainer" containerID="3f70174b11e96cdd2d573d9ee24e4219762e2a0529f8d646d037440b2831590b" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.757800 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.771890 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.780253 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.786876 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.789077 4793 scope.go:117] "RemoveContainer" containerID="239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.808659 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.854961 4793 scope.go:117] "RemoveContainer" containerID="e55e6db12bc091de69952e0e4d9fe2c04ddaa0a5ca5e5c173912be87073539b1" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.877585 4793 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 14:05:01 crc kubenswrapper[4793]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 14:05:01 crc kubenswrapper[4793]: > podSandboxID="47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.877795 4793 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 14:05:01 crc kubenswrapper[4793]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnxfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-jn5sc_openstack(6997fc47-52ce-4421-b8bc-14ad27f1d522): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 14:05:01 crc kubenswrapper[4793]: > logger="UnhandledError" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.879957 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.322810 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.412097 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" path="/var/lib/kubelet/pods/4ebaeca8-f301-4d75-8691-98415ddcf7e2/volumes" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.413546 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" path="/var/lib/kubelet/pods/57f8cfde-399c-43ec-bf72-e96f12a05ae2/volumes" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.631005 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.640938 4793 util.go:30] "No sandbox for pod can be found. 
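
The CreateContainerError above is a subPath race: each subPath mount is staged under /var/lib/kubelet/pods/<podUID>/volume-subpaths/<volume>/<container>/<index> and bind-mounted onto the target path resolved inside the container rootfs, which is why the message shows the relative `etc/dnsmasq.d/hosts/dns-svc`. When the target cannot be resolved the runtime fails with "No such file or directory", the kubelet dumps the full Container spec it tried to start, and pod_workers retries the sync; here the retry succeeds shortly after (the same pod's dnsmasq-dns container starts at 14:05:03 below). A sketch of the staging-path construction only, with values taken from the error message:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        podUID := "6997fc47-52ce-4421-b8bc-14ad27f1d522"
        // Per-pod staging area for subPath bind mounts, as in the error above.
        src := filepath.Join("/var/lib/kubelet/pods", podUID,
            "volume-subpaths", "dns-svc", "dnsmasq-dns", "1")
        dst := "etc/dnsmasq.d/hosts/dns-svc" // resolved against the container rootfs
        fmt.Printf("bind mount %s -> <rootfs>/%s\n", src, dst)
    }
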
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.643123 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.643564 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.650169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vvrcq" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.657909 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.658487 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerStarted","Data":"d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1"} Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.658867 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerStarted","Data":"d3a25e8a3b91c8c4040360de5d0cfe31c348e5b8ddffa9f734cc6f66d6f94231"} Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.660617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.662413 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" containerID="cri-o://1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" gracePeriod=10 Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835097 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dgdw\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-kube-api-access-5dgdw\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-cache\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835188 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76182868-5b55-403e-a2be-0c6879e9a2e0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835310 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-lock\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.936978 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937102 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-lock\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937177 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dgdw\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-kube-api-access-5dgdw\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-cache\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76182868-5b55-403e-a2be-0c6879e9a2e0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.942126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76182868-5b55-403e-a2be-0c6879e9a2e0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: E0130 14:05:02.942278 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:02 crc kubenswrapper[4793]: E0130 14:05:02.942294 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:02 crc kubenswrapper[4793]: E0130 14:05:02.942343 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:03.442323571 +0000 UTC m=+1314.143672072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.942917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-cache\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.943180 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.950762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-lock\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.971550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dgdw\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-kube-api-access-5dgdw\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.979339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.024233 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.154421 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.204670 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352209 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352288 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352385 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.355454 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks" (OuterVolumeSpecName: "kube-api-access-h8tks") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "kube-api-access-h8tks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.398448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.406643 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.407019 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config" (OuterVolumeSpecName: "config") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.454879 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455007 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455019 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455029 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455037 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.455188 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.455200 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.455242 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:04.455228065 +0000 UTC m=+1315.156576556 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.670626 4793 generic.go:334] "Generic (PLEG): container finished" podID="085da052-4aff-4c31-a5ac-398194b443a2" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" exitCode=0 Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.670816 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerDied","Data":"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.671059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerDied","Data":"8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.671083 4793 scope.go:117] "RemoveContainer" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.670884 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.672883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"270527bd-015e-4904-8916-07993e081611","Type":"ContainerStarted","Data":"948b5e724679b27c5ada2e3f8910371798d67929a4b80ce0d2918a8a15b29f5a"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.672906 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"270527bd-015e-4904-8916-07993e081611","Type":"ContainerStarted","Data":"59484b445fb7c7331b9d0dae505879134106f5a9ba82505de133080004eaa949"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.672964 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.678294 4793 generic.go:334] "Generic (PLEG): container finished" podID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerID="d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1" exitCode=0 Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.678370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerDied","Data":"d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.680412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerStarted","Data":"3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.680823 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.691075 4793 scope.go:117] "RemoveContainer" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 
14:05:03.715399 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.275388771 podStartE2EDuration="4.715373473s" podCreationTimestamp="2026-01-30 14:04:59 +0000 UTC" firstStartedPulling="2026-01-30 14:05:00.960845304 +0000 UTC m=+1311.662193795" lastFinishedPulling="2026-01-30 14:05:02.400830006 +0000 UTC m=+1313.102178497" observedRunningTime="2026-01-30 14:05:03.701439282 +0000 UTC m=+1314.402787773" watchObservedRunningTime="2026-01-30 14:05:03.715373473 +0000 UTC m=+1314.416721964" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.719835 4793 scope.go:117] "RemoveContainer" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.720477 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a\": container with ID starting with 1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a not found: ID does not exist" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.720511 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a"} err="failed to get container status \"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a\": rpc error: code = NotFound desc = could not find container \"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a\": container with ID starting with 1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a not found: ID does not exist" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.720536 4793 scope.go:117] "RemoveContainer" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.721009 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c\": container with ID starting with 88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c not found: ID does not exist" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.721150 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c"} err="failed to get container status \"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c\": rpc error: code = NotFound desc = could not find container \"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c\": container with ID starting with 88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c not found: ID does not exist" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.754862 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" podStartSLOduration=4.754846511 podStartE2EDuration="4.754846511s" podCreationTimestamp="2026-01-30 14:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:03.744321383 +0000 UTC m=+1314.445669874" 
watchObservedRunningTime="2026-01-30 14:05:03.754846511 +0000 UTC m=+1314.456195002" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.769739 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.777188 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.407860 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085da052-4aff-4c31-a5ac-398194b443a2" path="/var/lib/kubelet/pods/085da052-4aff-4c31-a5ac-398194b443a2/volumes" Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.472210 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:04 crc kubenswrapper[4793]: E0130 14:05:04.472454 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:04 crc kubenswrapper[4793]: E0130 14:05:04.472493 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:04 crc kubenswrapper[4793]: E0130 14:05:04.472562 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:06.472540556 +0000 UTC m=+1317.173889067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.691779 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerStarted","Data":"610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d"} Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.692891 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.714435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podStartSLOduration=3.714416636 podStartE2EDuration="3.714416636s" podCreationTimestamp="2026-01-30 14:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:04.709729701 +0000 UTC m=+1315.411078192" watchObservedRunningTime="2026-01-30 14:05:04.714416636 +0000 UTC m=+1315.415765127" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.474801 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q459t"] Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.475421 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.475436 4793 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.475452 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="init" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.475458 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="init" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.475612 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.476135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.478274 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.479274 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.479588 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.493409 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q459t"] Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.518937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.519535 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.519557 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.519602 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:10.519586921 +0000 UTC m=+1321.220935412 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620615 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620658 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620958 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.621074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.621179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.729957 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730254 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730363 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730881 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.731377 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.731709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.736194 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.739760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.750027 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.756763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.793847 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.079375 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.080588 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.082693 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.087828 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.240137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.240199 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.254907 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q459t"] Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.341334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.341397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.342211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.364075 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.403616 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.467521 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.468785 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.563708 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.720256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerStarted","Data":"dfcd68a21a6ccc777d3dfdabb9d0541bc18ef4395d6201dad4b19a23446f3679"} Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.855545 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.892597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:08 crc kubenswrapper[4793]: I0130 14:05:08.734323 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerStarted","Data":"d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382"} Jan 30 14:05:08 crc kubenswrapper[4793]: I0130 14:05:08.734400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerStarted","Data":"ea2f9a9f4498165ce27de35a5cb85dff750b4522c42a1e477432a11404a3b30e"} Jan 30 14:05:08 crc kubenswrapper[4793]: I0130 14:05:08.760348 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-x9wgt" podStartSLOduration=1.760331045 podStartE2EDuration="1.760331045s" podCreationTimestamp="2026-01-30 14:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:08.754984415 +0000 UTC m=+1319.456332946" watchObservedRunningTime="2026-01-30 14:05:08.760331045 +0000 UTC m=+1319.461679536" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.474670 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.476132 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.480845 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.546704 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.547825 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.551013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.559555 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.584370 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.584442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.590221 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.685964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.686020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.686209 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.686297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.688286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.732692 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.752076 4793 generic.go:334] "Generic (PLEG): container finished" podID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerID="d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382" exitCode=0 Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.753291 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerDied","Data":"d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382"} Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.788413 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.788511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.789671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.796381 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.814649 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.848655 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.849750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.857028 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.866436 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.947374 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.948375 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.952288 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.996632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.996711 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.013211 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098661 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098714 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.099631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod 
\"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.117263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.166685 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.200683 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.200833 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.201540 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.217336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.305924 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.307143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.329704 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.333390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.395901 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.397449 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.405866 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.406120 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.410818 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.437866 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508095 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508167 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508196 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508299 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.537833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfwd6\" (UniqueName: 
\"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.610468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.610805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.611039 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.611125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: E0130 14:05:10.611234 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:10 crc kubenswrapper[4793]: E0130 14:05:10.611410 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:10 crc kubenswrapper[4793]: E0130 14:05:10.611513 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:18.611500219 +0000 UTC m=+1329.312848700 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.627973 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.632009 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.726315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.388171 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.810433 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.868121 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.868405 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" containerID="cri-o://3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262" gracePeriod=10 Jan 30 14:05:12 crc kubenswrapper[4793]: I0130 14:05:12.784636 4793 generic.go:334] "Generic (PLEG): container finished" podID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerID="3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262" exitCode=0 Jan 30 14:05:12 crc kubenswrapper[4793]: I0130 14:05:12.784685 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerDied","Data":"3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262"} Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.437475 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.559766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"1fd3bf73-817a-402e-866c-8a91e0bc2428\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.559969 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"1fd3bf73-817a-402e-866c-8a91e0bc2428\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.561274 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fd3bf73-817a-402e-866c-8a91e0bc2428" (UID: "1fd3bf73-817a-402e-866c-8a91e0bc2428"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.567759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8" (OuterVolumeSpecName: "kube-api-access-sr5j8") pod "1fd3bf73-817a-402e-866c-8a91e0bc2428" (UID: "1fd3bf73-817a-402e-866c-8a91e0bc2428"). InnerVolumeSpecName "kube-api-access-sr5j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.662462 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.662492 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.673472 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765684 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765710 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765728 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765798 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.784245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv" (OuterVolumeSpecName: "kube-api-access-vnxfv") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "kube-api-access-vnxfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.812146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerDied","Data":"47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f"} Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.812387 4793 scope.go:117] "RemoveContainer" containerID="3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.812517 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.818261 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.817976 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerDied","Data":"ea2f9a9f4498165ce27de35a5cb85dff750b4522c42a1e477432a11404a3b30e"} Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.818466 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2f9a9f4498165ce27de35a5cb85dff750b4522c42a1e477432a11404a3b30e" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.835289 4793 scope.go:117] "RemoveContainer" containerID="dc354132d0a6cd02111dfdce273ff0e36cd8eedf4408a97ce6c6cb48e38782b8" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.836441 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.847363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.863495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.863622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config" (OuterVolumeSpecName: "config") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.867952 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868101 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868176 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868265 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868337 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: W0130 14:05:13.914097 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d0f274e_c187_4f1a_aa78_508b1761f9fb.slice/crio-1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8 WatchSource:0}: Error finding container 1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8: Status 404 returned error can't find the container with id 1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8 Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.926917 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.926964 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.059207 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.081136 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.087498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:05:14 crc kubenswrapper[4793]: W0130 14:05:14.098175 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62fbb159_dc72_4c34_b2b7_5be6be4df981.slice/crio-cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028 WatchSource:0}: Error finding container cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028: Status 404 returned error can't find the container with id cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028 Jan 30 14:05:14 crc kubenswrapper[4793]: W0130 14:05:14.100468 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf81f2e71_1a70_491f_ba0c_ad1a456345c8.slice/crio-1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e WatchSource:0}: Error finding container 1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e: Status 404 returned error can't find the container with id 1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.127655 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.150181 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.155837 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.211253 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.240501 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.408400 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" path="/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volumes" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.826233 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerStarted","Data":"43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.826558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerStarted","Data":"1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.828954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerStarted","Data":"a1b8fa0ad1007024e2a758d432cfe8f804db4960d86814b080a404a5d1c5e7dd"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.831642 4793 generic.go:334] "Generic (PLEG): container finished" podID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerID="3efaeb1f3745caf5c2ff18e628906fd2ae05a6952ec9376aacd048e2c31a3cdb" exitCode=0 Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.831704 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq6pw" event={"ID":"b3f03641-1e63-4c88-a1f4-f58cf0d81883","Type":"ContainerDied","Data":"3efaeb1f3745caf5c2ff18e628906fd2ae05a6952ec9376aacd048e2c31a3cdb"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.831725 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq6pw" event={"ID":"b3f03641-1e63-4c88-a1f4-f58cf0d81883","Type":"ContainerStarted","Data":"a9e447eeda31cacf6f4b15b396de8b08fe6fa521839c2bcdccd64834364aae1e"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.833757 4793 generic.go:334] "Generic (PLEG): container finished" podID="98986ea8-62f3-4716-9451-0e13567ec2a1" 
containerID="2bc34dab4f37d7b6429a87926db0d3a5178ff268821d2ee975bfe47cb007e77b" exitCode=0 Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.833811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pwcc" event={"ID":"98986ea8-62f3-4716-9451-0e13567ec2a1","Type":"ContainerDied","Data":"2bc34dab4f37d7b6429a87926db0d3a5178ff268821d2ee975bfe47cb007e77b"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.833830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pwcc" event={"ID":"98986ea8-62f3-4716-9451-0e13567ec2a1","Type":"ContainerStarted","Data":"cb09760039a9112dfda2f514c6cc6d916cb55c3c695ec127a1cd6546c15b55a8"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.835479 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerStarted","Data":"792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.835505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerStarted","Data":"cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.838004 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerStarted","Data":"e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.838030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerStarted","Data":"7d69a7884cd7efe94de2ea93b06606bf6e99299116b61e5a4762af1a31d75436"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.842178 4793 generic.go:334] "Generic (PLEG): container finished" podID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerID="e076400efeb8dc1f3b157eb928b1925e404de84a86497e6441e959675b9ddf99" exitCode=0 Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.842267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbcdm" event={"ID":"6d0f274e-c187-4f1a-aa78-508b1761f9fb","Type":"ContainerDied","Data":"e076400efeb8dc1f3b157eb928b1925e404de84a86497e6441e959675b9ddf99"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.842292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbcdm" event={"ID":"6d0f274e-c187-4f1a-aa78-508b1761f9fb","Type":"ContainerStarted","Data":"1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.874124 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ff11-account-create-update-p5nhq" podStartSLOduration=5.874111 podStartE2EDuration="5.874111s" podCreationTimestamp="2026-01-30 14:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:14.848440201 +0000 UTC m=+1325.549788702" watchObservedRunningTime="2026-01-30 14:05:14.874111 +0000 UTC m=+1325.575459491" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.892354 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-q459t" podStartSLOduration=2.5925016100000002 podStartE2EDuration="8.892337757s" podCreationTimestamp="2026-01-30 14:05:06 +0000 UTC" firstStartedPulling="2026-01-30 14:05:07.247913327 +0000 UTC m=+1317.949261808" lastFinishedPulling="2026-01-30 14:05:13.547749474 +0000 UTC m=+1324.249097955" observedRunningTime="2026-01-30 14:05:14.871572218 +0000 UTC m=+1325.572920709" watchObservedRunningTime="2026-01-30 14:05:14.892337757 +0000 UTC m=+1325.593686248" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.935063 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3a9f-account-create-update-zkbvj" podStartSLOduration=4.935031743 podStartE2EDuration="4.935031743s" podCreationTimestamp="2026-01-30 14:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:14.89123395 +0000 UTC m=+1325.592582441" watchObservedRunningTime="2026-01-30 14:05:14.935031743 +0000 UTC m=+1325.636380234" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.968886 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-22a6-account-create-update-59kzd" podStartSLOduration=5.968845433 podStartE2EDuration="5.968845433s" podCreationTimestamp="2026-01-30 14:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:14.964471725 +0000 UTC m=+1325.665820216" watchObservedRunningTime="2026-01-30 14:05:14.968845433 +0000 UTC m=+1325.670193924" Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.853879 4793 generic.go:334] "Generic (PLEG): container finished" podID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerID="792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e" exitCode=0 Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.854360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerDied","Data":"792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e"} Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.856201 4793 generic.go:334] "Generic (PLEG): container finished" podID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerID="e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c" exitCode=0 Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.856360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerDied","Data":"e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c"} Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.863505 4793 generic.go:334] "Generic (PLEG): container finished" podID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerID="43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053" exitCode=0 Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.863716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerDied","Data":"43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053"} Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.989356 4793 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.004436 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073396 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:16 crc kubenswrapper[4793]: E0130 14:05:16.073758 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerName="mariadb-account-create-update" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073791 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerName="mariadb-account-create-update" Jan 30 14:05:16 crc kubenswrapper[4793]: E0130 14:05:16.073812 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073818 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" Jan 30 14:05:16 crc kubenswrapper[4793]: E0130 14:05:16.073832 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="init" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="init" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.074020 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.074058 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerName="mariadb-account-create-update" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.074575 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.077696 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.083682 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.219769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.223504 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.294038 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.325166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.325257 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.327862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.369459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.393782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.415577 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" path="/var/lib/kubelet/pods/1fd3bf73-817a-402e-866c-8a91e0bc2428/volumes" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.426570 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.426661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.427242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3f03641-1e63-4c88-a1f4-f58cf0d81883" (UID: "b3f03641-1e63-4c88-a1f4-f58cf0d81883"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.427859 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.430613 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg" (OuterVolumeSpecName: "kube-api-access-pr8tg") pod "b3f03641-1e63-4c88-a1f4-f58cf0d81883" (UID: "b3f03641-1e63-4c88-a1f4-f58cf0d81883"). InnerVolumeSpecName "kube-api-access-pr8tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.463613 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.469498 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.529827 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631233 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"98986ea8-62f3-4716-9451-0e13567ec2a1\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631313 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"98986ea8-62f3-4716-9451-0e13567ec2a1\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631454 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.632535 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98986ea8-62f3-4716-9451-0e13567ec2a1" (UID: "98986ea8-62f3-4716-9451-0e13567ec2a1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.632680 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d0f274e-c187-4f1a-aa78-508b1761f9fb" (UID: "6d0f274e-c187-4f1a-aa78-508b1761f9fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.635305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6" (OuterVolumeSpecName: "kube-api-access-tfwd6") pod "6d0f274e-c187-4f1a-aa78-508b1761f9fb" (UID: "6d0f274e-c187-4f1a-aa78-508b1761f9fb"). InnerVolumeSpecName "kube-api-access-tfwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.638903 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx" (OuterVolumeSpecName: "kube-api-access-bv8gx") pod "98986ea8-62f3-4716-9451-0e13567ec2a1" (UID: "98986ea8-62f3-4716-9451-0e13567ec2a1"). InnerVolumeSpecName "kube-api-access-bv8gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734025 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734227 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734237 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734248 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.161308 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.165150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pwcc" event={"ID":"98986ea8-62f3-4716-9451-0e13567ec2a1","Type":"ContainerDied","Data":"cb09760039a9112dfda2f514c6cc6d916cb55c3c695ec127a1cd6546c15b55a8"} Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.165255 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb09760039a9112dfda2f514c6cc6d916cb55c3c695ec127a1cd6546c15b55a8" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.167545 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.167556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq6pw" event={"ID":"b3f03641-1e63-4c88-a1f4-f58cf0d81883","Type":"ContainerDied","Data":"a9e447eeda31cacf6f4b15b396de8b08fe6fa521839c2bcdccd64834364aae1e"} Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.167880 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e447eeda31cacf6f4b15b396de8b08fe6fa521839c2bcdccd64834364aae1e" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.169568 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.172074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbcdm" event={"ID":"6d0f274e-c187-4f1a-aa78-508b1761f9fb","Type":"ContainerDied","Data":"1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8"} Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.172112 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.205490 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.659006 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.673642 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.673724 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:17.799824 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c5fc335_85d3_41d9_af0a_d0e3aede352b.slice/crio-conmon-0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c5fc335_85d3_41d9_af0a_d0e3aede352b.slice/crio-0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850662 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"62fbb159-dc72-4c34-b2b7-5be6be4df981\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850746 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"563516b7-0256-4c05-b1d1-3aa03d692afb\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"62fbb159-dc72-4c34-b2b7-5be6be4df981\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"563516b7-0256-4c05-b1d1-3aa03d692afb\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.851934 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62fbb159-dc72-4c34-b2b7-5be6be4df981" (UID: "62fbb159-dc72-4c34-b2b7-5be6be4df981"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.852122 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.852114 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f81f2e71-1a70-491f-ba0c-ad1a456345c8" (UID: "f81f2e71-1a70-491f-ba0c-ad1a456345c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.852409 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "563516b7-0256-4c05-b1d1-3aa03d692afb" (UID: "563516b7-0256-4c05-b1d1-3aa03d692afb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.856758 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6" (OuterVolumeSpecName: "kube-api-access-t5gw6") pod "563516b7-0256-4c05-b1d1-3aa03d692afb" (UID: "563516b7-0256-4c05-b1d1-3aa03d692afb"). InnerVolumeSpecName "kube-api-access-t5gw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.856859 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn" (OuterVolumeSpecName: "kube-api-access-97zzn") pod "62fbb159-dc72-4c34-b2b7-5be6be4df981" (UID: "62fbb159-dc72-4c34-b2b7-5be6be4df981"). InnerVolumeSpecName "kube-api-access-97zzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.857448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626" (OuterVolumeSpecName: "kube-api-access-vm626") pod "f81f2e71-1a70-491f-ba0c-ad1a456345c8" (UID: "f81f2e71-1a70-491f-ba0c-ad1a456345c8"). InnerVolumeSpecName "kube-api-access-vm626". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953155 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953186 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953201 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953210 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953220 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.180686 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerDied","Data":"7d69a7884cd7efe94de2ea93b06606bf6e99299116b61e5a4762af1a31d75436"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.180717 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d69a7884cd7efe94de2ea93b06606bf6e99299116b61e5a4762af1a31d75436" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.180790 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.186192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerDied","Data":"1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.186222 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.186297 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.191802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerDied","Data":"cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.191846 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.191962 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.203019 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" containerID="0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8" exitCode=0 Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.203062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r6w5v" event={"ID":"8c5fc335-85d3-41d9-af0a-d0e3aede352b","Type":"ContainerDied","Data":"0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.203159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r6w5v" event={"ID":"8c5fc335-85d3-41d9-af0a-d0e3aede352b","Type":"ContainerStarted","Data":"ef7e3d86992b0608a1f5c882b1bed3724444b7f930e935580cc522ebda3d7a72"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.674975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:18.675200 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:18.675396 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:18.675447 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:34.675430503 +0000 UTC m=+1345.376778994 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.214015 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerID="d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991" exitCode=0 Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.214079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerDied","Data":"d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991"} Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.216679 4793 generic.go:334] "Generic (PLEG): container finished" podID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" exitCode=0 Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.216792 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerDied","Data":"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48"} Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.595540 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.689960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.690114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.690483 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c5fc335-85d3-41d9-af0a-d0e3aede352b" (UID: "8c5fc335-85d3-41d9-af0a-d0e3aede352b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.690632 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.695549 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt" (OuterVolumeSpecName: "kube-api-access-bmmjt") pod "8c5fc335-85d3-41d9-af0a-d0e3aede352b" (UID: "8c5fc335-85d3-41d9-af0a-d0e3aede352b"). InnerVolumeSpecName "kube-api-access-bmmjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.744714 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745007 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745025 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745033 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745040 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745068 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745075 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745087 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745094 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745116 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745127 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745133 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745150 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745156 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745310 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745324 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" 
containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745333 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745341 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745351 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745360 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745368 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745815 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.754631 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.758844 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jb79g" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.761421 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.792370 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893463 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893542 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod 
\"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.994774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.995030 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.995132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.995187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.998566 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.999322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.999463 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.016637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.059309 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.280567 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerStarted","Data":"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa"} Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.281300 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.285384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r6w5v" event={"ID":"8c5fc335-85d3-41d9-af0a-d0e3aede352b","Type":"ContainerDied","Data":"ef7e3d86992b0608a1f5c882b1bed3724444b7f930e935580cc522ebda3d7a72"} Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.285423 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7e3d86992b0608a1f5c882b1bed3724444b7f930e935580cc522ebda3d7a72" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.285454 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.287674 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerStarted","Data":"b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265"} Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.287982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.327071 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.641938794 podStartE2EDuration="1m16.327031613s" podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:06.939217602 +0000 UTC m=+1257.640566093" lastFinishedPulling="2026-01-30 14:04:44.624310421 +0000 UTC m=+1295.325658912" observedRunningTime="2026-01-30 14:05:20.314065526 +0000 UTC m=+1331.015414027" watchObservedRunningTime="2026-01-30 14:05:20.327031613 +0000 UTC m=+1331.028380104" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.387094 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.744207081 podStartE2EDuration="1m16.387071735s" podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:07.063342615 +0000 UTC m=+1257.764691106" lastFinishedPulling="2026-01-30 14:04:44.706207269 +0000 UTC m=+1295.407555760" observedRunningTime="2026-01-30 14:05:20.352824146 +0000 UTC m=+1331.054172647" watchObservedRunningTime="2026-01-30 14:05:20.387071735 +0000 UTC m=+1331.088420226" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.431445 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.578435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:05:20 crc kubenswrapper[4793]: W0130 14:05:20.589325 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b977757_3d3e_48e5_a1e2_d31ebeda138e.slice/crio-7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b WatchSource:0}: Error finding container 7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b: Status 404 returned error can't find the container with id 7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b Jan 30 14:05:21 crc kubenswrapper[4793]: I0130 14:05:21.296322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerStarted","Data":"7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b"} Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.198618 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.207264 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.307452 4793 generic.go:334] "Generic (PLEG): container finished" podID="50011731-846f-4e86-8664-f9c797dc64ed" containerID="a1b8fa0ad1007024e2a758d432cfe8f804db4960d86814b080a404a5d1c5e7dd" exitCode=0 Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.307499 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerDied","Data":"a1b8fa0ad1007024e2a758d432cfe8f804db4960d86814b080a404a5d1c5e7dd"} Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.411461 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" path="/var/lib/kubelet/pods/8c5fc335-85d3-41d9-af0a-d0e3aede352b/volumes" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.738603 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854279 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854393 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854450 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855237 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855796 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.856059 4793 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.856085 4793 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.868259 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.877841 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts" (OuterVolumeSpecName: "scripts") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.881294 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46" (OuterVolumeSpecName: "kube-api-access-h4s46") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "kube-api-access-h4s46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.886951 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.909845 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957513 4793 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957541 4793 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957550 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957559 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957573 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.332460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerDied","Data":"dfcd68a21a6ccc777d3dfdabb9d0541bc18ef4395d6201dad4b19a23446f3679"} Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.332507 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcd68a21a6ccc777d3dfdabb9d0541bc18ef4395d6201dad4b19a23446f3679" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.332588 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.638843 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.696750 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-45fd5" podUID="230700ff-5087-4d0d-9d93-90b597d2ef72" containerName="ovn-controller" probeResult="failure" output=< Jan 30 14:05:24 crc kubenswrapper[4793]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 14:05:24 crc kubenswrapper[4793]: > Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.218257 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:05:27 crc kubenswrapper[4793]: E0130 14:05:27.218534 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50011731-846f-4e86-8664-f9c797dc64ed" containerName="swift-ring-rebalance" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.218545 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="50011731-846f-4e86-8664-f9c797dc64ed" containerName="swift-ring-rebalance" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.218698 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="50011731-846f-4e86-8664-f9c797dc64ed" containerName="swift-ring-rebalance" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.219164 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.222848 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.241838 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.412072 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.412225 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.514562 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.515004 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod 
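[Editor's note, not part of the captured log: the "Probe failed" records above come from kubelet's prober running an exec readiness probe in the ovn-controller container and capturing its output when it exits nonzero; the failures recur roughly every five seconds. A minimal sketch of what such a probe spec looks like with the k8s.io/api types; the script path and thresholds are assumptions for illustration, not taken from this log.]

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1" // requires the k8s.io/api module
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                // Hypothetical check script: exits 0 only when
                // ovn-controller reports a 'connected' status, and
                // prints the ERROR line seen above otherwise.
                Exec: &corev1.ExecAction{
                    Command: []string{"/usr/local/bin/ovn_controller_readiness.sh"},
                },
            },
            PeriodSeconds:    5, // consistent with the ~5s spacing of the failures
            FailureThreshold: 3,
        }
        fmt.Printf("%+v\n", probe)
    }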
\"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.515406 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.536605 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.602144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.606402 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.707596 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-45fd5" podUID="230700ff-5087-4d0d-9d93-90b597d2ef72" containerName="ovn-controller" probeResult="failure" output=< Jan 30 14:05:29 crc kubenswrapper[4793]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 14:05:29 crc kubenswrapper[4793]: > Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.847778 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.848785 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.851104 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.852958 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853105 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853128 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.876850 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.954995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") 
pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955154 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.956552 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.960196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" 
(UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.991348 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:30 crc kubenswrapper[4793]: I0130 14:05:30.178896 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:33 crc kubenswrapper[4793]: I0130 14:05:33.813333 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:33 crc kubenswrapper[4793]: I0130 14:05:33.929244 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:05:33 crc kubenswrapper[4793]: W0130 14:05:33.929667 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec365c0b_f8d9_4b59_bb89_a583d1eb7257.slice/crio-6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43 WatchSource:0}: Error finding container 6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43: Status 404 returned error can't find the container with id 6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43 Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.420809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerStarted","Data":"aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.422677 4793 generic.go:334] "Generic (PLEG): container finished" podID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerID="915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065" exitCode=0 Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.422763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5-config-7cmw2" event={"ID":"afab5fb9-07ec-48e9-b50b-28e47d11942b","Type":"ContainerDied","Data":"915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.422795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5-config-7cmw2" event={"ID":"afab5fb9-07ec-48e9-b50b-28e47d11942b","Type":"ContainerStarted","Data":"3183ceacb40c43d1a8e662c19d9461e4ddb8e55c500e70d8862604cd360f4f8b"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.425028 4793 generic.go:334] "Generic (PLEG): container finished" podID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerID="49617378d146339946d69a33ebd155e69d9eb4e257e62cbaa6d931330bc913ba" exitCode=0 Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.425124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ktlrj" event={"ID":"ec365c0b-f8d9-4b59-bb89-a583d1eb7257","Type":"ContainerDied","Data":"49617378d146339946d69a33ebd155e69d9eb4e257e62cbaa6d931330bc913ba"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.425150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ktlrj" 
event={"ID":"ec365c0b-f8d9-4b59-bb89-a583d1eb7257","Type":"ContainerStarted","Data":"6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.449907 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-btxs9" podStartSLOduration=2.6111264050000003 podStartE2EDuration="15.449886529s" podCreationTimestamp="2026-01-30 14:05:19 +0000 UTC" firstStartedPulling="2026-01-30 14:05:20.591851456 +0000 UTC m=+1331.293199947" lastFinishedPulling="2026-01-30 14:05:33.43061156 +0000 UTC m=+1344.131960071" observedRunningTime="2026-01-30 14:05:34.440629902 +0000 UTC m=+1345.141978403" watchObservedRunningTime="2026-01-30 14:05:34.449886529 +0000 UTC m=+1345.151235020" Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.689255 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-45fd5" Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.751400 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.759480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.057954 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.443465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.833126 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.840439 4793 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971223 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971252 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971452 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972202 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971997 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972251 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts" (OuterVolumeSpecName: "scripts") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972687 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec365c0b-f8d9-4b59-bb89-a583d1eb7257" (UID: "ec365c0b-f8d9-4b59-bb89-a583d1eb7257"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") "
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972784 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run" (OuterVolumeSpecName: "var-run") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972968 4793 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972984 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972995 4793 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.973004 4793 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.973012 4793 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.973020 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.976520 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99" (OuterVolumeSpecName: "kube-api-access-rtd99") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "kube-api-access-rtd99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.976614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr" (OuterVolumeSpecName: "kube-api-access-kvdfr") pod "ec365c0b-f8d9-4b59-bb89-a583d1eb7257" (UID: "ec365c0b-f8d9-4b59-bb89-a583d1eb7257"). InnerVolumeSpecName "kube-api-access-kvdfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.074430 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.074462 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") on node \"crc\" DevicePath \"\""
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.081262 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.238007 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.455726 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gvh75"]
Jan 30 14:05:36 crc kubenswrapper[4793]: E0130 14:05:36.456042 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerName="mariadb-account-create-update"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456073 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerName="mariadb-account-create-update"
Jan 30 14:05:36 crc kubenswrapper[4793]: E0130 14:05:36.456085 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerName="ovn-config"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456093 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerName="ovn-config"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456258 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerName="mariadb-account-create-update"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456279 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerName="ovn-config"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.464980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"c2c4fd28411dc4300f936e163f6ecb733dff5d088151b768ba5cc48730783c5f"}
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.475322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5-config-7cmw2" event={"ID":"afab5fb9-07ec-48e9-b50b-28e47d11942b","Type":"ContainerDied","Data":"3183ceacb40c43d1a8e662c19d9461e4ddb8e55c500e70d8862604cd360f4f8b"}
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.475358 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3183ceacb40c43d1a8e662c19d9461e4ddb8e55c500e70d8862604cd360f4f8b"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.475416 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.486876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ktlrj" event={"ID":"ec365c0b-f8d9-4b59-bb89-a583d1eb7257","Type":"ContainerDied","Data":"6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43"}
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.487154 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.487011 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.498889 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gvh75"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.583960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.584025 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.641933 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.643113 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.645371 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.672608 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.685298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.685343 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.686295 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.729794 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.758620 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-89mld"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.759566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.787603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.787708 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.789222 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-89mld"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.800760 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.801826 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.806254 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.841269 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.881435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889552 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889704 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.890676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.929991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.959003 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.974122 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t2ntm"]
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.975213 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2ntm"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993441 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.994329 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.995016 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.027819 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t2ntm"]
Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.049176 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm"
Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.053896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld"
Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.077159 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"]
pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.078215 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.080200 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.082945 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.086816 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.095656 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.095770 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.123613 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.127579 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.132808 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.148891 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.150241 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.154556 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.154772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.156516 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.156669 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.166985 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.196823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.196895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.196980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.197003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.197697 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.214695 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298826 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298902 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.299758 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.314130 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.317657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.400266 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.400340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.400402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.401748 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.404813 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.406709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.419184 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.463409 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.192373 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.263171 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.289557 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.303446 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:05:38 crc kubenswrapper[4793]: W0130 14:05:38.451132 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaec468e_bf72_4c93_8b47_6aac4c7a0b3d.slice/crio-73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c WatchSource:0}: Error finding container 73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c: Status 404 returned error can't find the container with id 73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.453736 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" path="/var/lib/kubelet/pods/afab5fb9-07ec-48e9-b50b-28e47d11942b/volumes" Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.471928 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.474262 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.522475 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.544201 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ee-account-create-update-56zfp" event={"ID":"2392ab6f-ca9b-4211-bd23-a243ce0ee554","Type":"ContainerStarted","Data":"d4cf9631195a64608c3f002c83e4f091ee13070d383c3da9feede1c63959b9ad"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.562033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerStarted","Data":"73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.580095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"c51b1feaec54051ed2fbb26721cebf026aa34164ecab75afe8fb181253d7cf07"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.580146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"1d8b5c5b0c9368bfd86c628db2535079b0cc886d06e9ceb9edd83c4cc416215b"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.593857 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2ntm" 
event={"ID":"e00abb05-5932-47c8-9bd4-34014f966013","Type":"ContainerStarted","Data":"1021ce56a65f1678d6067bce77001cc3379da23303902ddfacdf17e2cf71d0d6"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.605320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f03-account-create-update-s5gbm" event={"ID":"6c07a623-53fe-44a2-9810-5d1137c659c3","Type":"ContainerStarted","Data":"ee48e1466c00be71a5cc4e94080113b3179b45afeb01e2591c730c312c7e1330"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.624693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerStarted","Data":"a98469b953fdea84db2353b46820e7ccea308550c6d0675a79c61f90585562e6"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.634003 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-89mld" event={"ID":"13613099-2932-4476-8032-82095348fb10","Type":"ContainerStarted","Data":"0d6cb9581f933e041346e0d413379c356e5ec4a01767e314546263b6c74898b2"} Jan 30 14:05:39 crc kubenswrapper[4793]: I0130 14:05:39.642006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac9c-account-create-update-6cnjz" event={"ID":"1f786311-b5ef-427f-b167-c49267de28c6","Type":"ContainerStarted","Data":"2deeaef8b972645a1d4c815ad2b00a78dfaff0b6cd39c4e7e87229596ae5df93"} Jan 30 14:05:41 crc kubenswrapper[4793]: I0130 14:05:41.657069 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerStarted","Data":"73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7"} Jan 30 14:05:41 crc kubenswrapper[4793]: I0130 14:05:41.658725 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"7a5ace428948da31f74e2caec8ff49c143ac2f3ff7117ecf46cd32e1d24edde9"} Jan 30 14:05:41 crc kubenswrapper[4793]: I0130 14:05:41.682873 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-gvh75" podStartSLOduration=5.6828521720000005 podStartE2EDuration="5.682852172s" podCreationTimestamp="2026-01-30 14:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:41.671932004 +0000 UTC m=+1352.373280505" watchObservedRunningTime="2026-01-30 14:05:41.682852172 +0000 UTC m=+1352.384200733" Jan 30 14:05:42 crc kubenswrapper[4793]: I0130 14:05:42.413893 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:05:42 crc kubenswrapper[4793]: I0130 14:05:42.413972 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:05:42 crc kubenswrapper[4793]: I0130 14:05:42.668734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"39b8bf95080274fbc27d3409af96f8cd4dee705879ecae4910ae82cb5c5960e8"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.676423 4793 generic.go:334] "Generic (PLEG): container finished" podID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerID="b3caaa69aab524adb26fd9c4ff43996ac15d6994d1472ccaa076a079e9b6dba0" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.676536 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f03-account-create-update-s5gbm" event={"ID":"6c07a623-53fe-44a2-9810-5d1137c659c3","Type":"ContainerDied","Data":"b3caaa69aab524adb26fd9c4ff43996ac15d6994d1472ccaa076a079e9b6dba0"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.678022 4793 generic.go:334] "Generic (PLEG): container finished" podID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerID="73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.678105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerDied","Data":"73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.679872 4793 generic.go:334] "Generic (PLEG): container finished" podID="13613099-2932-4476-8032-82095348fb10" containerID="75d0a8131037e3e42e5261a0799894acdf4d57f9756c3dd89c681177ee69f801" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.679938 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-89mld" event={"ID":"13613099-2932-4476-8032-82095348fb10","Type":"ContainerDied","Data":"75d0a8131037e3e42e5261a0799894acdf4d57f9756c3dd89c681177ee69f801"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.681746 4793 generic.go:334] "Generic (PLEG): container finished" podID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerID="88e81edcf2367a38a7b0e1df9af6001a75b1047fd8c5d669cd70d0dad383c305" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.681786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ee-account-create-update-56zfp" event={"ID":"2392ab6f-ca9b-4211-bd23-a243ce0ee554","Type":"ContainerDied","Data":"88e81edcf2367a38a7b0e1df9af6001a75b1047fd8c5d669cd70d0dad383c305"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.683528 4793 generic.go:334] "Generic (PLEG): container finished" podID="e00abb05-5932-47c8-9bd4-34014f966013" containerID="4a2aafe80408cac269537f00f3232599775bbba2b58f84e2c22d7bc9ff168a56" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.683566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2ntm" event={"ID":"e00abb05-5932-47c8-9bd4-34014f966013","Type":"ContainerDied","Data":"4a2aafe80408cac269537f00f3232599775bbba2b58f84e2c22d7bc9ff168a56"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.684952 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f786311-b5ef-427f-b167-c49267de28c6" containerID="be7f675ca5c9219f83817d0e2dc9af6d1edad5191618166a3b580984eb47dd17" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.684979 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac9c-account-create-update-6cnjz" 
event={"ID":"1f786311-b5ef-427f-b167-c49267de28c6","Type":"ContainerDied","Data":"be7f675ca5c9219f83817d0e2dc9af6d1edad5191618166a3b580984eb47dd17"} Jan 30 14:05:44 crc kubenswrapper[4793]: I0130 14:05:44.698398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"dd13069ceb47825909b33bb601082e34ff4af97379264c16584ddabfa433c75f"} Jan 30 14:05:44 crc kubenswrapper[4793]: I0130 14:05:44.698999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"05599580ba24b8de745bbb2423d18a9f5f1082fb5f2e3834df84741cbe48e2a8"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.520905 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.539644 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.547958 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.553194 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.559949 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.584086 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.644769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.644905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"1f786311-b5ef-427f-b167-c49267de28c6\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.644945 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645003 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"13613099-2932-4476-8032-82095348fb10\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645039 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"13613099-2932-4476-8032-82095348fb10\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645084 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645110 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"1f786311-b5ef-427f-b167-c49267de28c6\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"e00abb05-5932-47c8-9bd4-34014f966013\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645228 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"e00abb05-5932-47c8-9bd4-34014f966013\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.647334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2392ab6f-ca9b-4211-bd23-a243ce0ee554" (UID: "2392ab6f-ca9b-4211-bd23-a243ce0ee554"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.647403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f786311-b5ef-427f-b167-c49267de28c6" (UID: "1f786311-b5ef-427f-b167-c49267de28c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.647403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e00abb05-5932-47c8-9bd4-34014f966013" (UID: "e00abb05-5932-47c8-9bd4-34014f966013"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.648508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13613099-2932-4476-8032-82095348fb10" (UID: "13613099-2932-4476-8032-82095348fb10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.648771 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfa3c464-d85c-4ea1-816e-7dda86dbb9de" (UID: "bfa3c464-d85c-4ea1-816e-7dda86dbb9de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.651972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7" (OuterVolumeSpecName: "kube-api-access-cv6g7") pod "1f786311-b5ef-427f-b167-c49267de28c6" (UID: "1f786311-b5ef-427f-b167-c49267de28c6"). InnerVolumeSpecName "kube-api-access-cv6g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.652754 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27" (OuterVolumeSpecName: "kube-api-access-gjl27") pod "bfa3c464-d85c-4ea1-816e-7dda86dbb9de" (UID: "bfa3c464-d85c-4ea1-816e-7dda86dbb9de"). InnerVolumeSpecName "kube-api-access-gjl27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.652902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt" (OuterVolumeSpecName: "kube-api-access-tw8kt") pod "2392ab6f-ca9b-4211-bd23-a243ce0ee554" (UID: "2392ab6f-ca9b-4211-bd23-a243ce0ee554"). InnerVolumeSpecName "kube-api-access-tw8kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.653851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77" (OuterVolumeSpecName: "kube-api-access-7rq77") pod "e00abb05-5932-47c8-9bd4-34014f966013" (UID: "e00abb05-5932-47c8-9bd4-34014f966013"). InnerVolumeSpecName "kube-api-access-7rq77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.661736 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn" (OuterVolumeSpecName: "kube-api-access-t5mhn") pod "13613099-2932-4476-8032-82095348fb10" (UID: "13613099-2932-4476-8032-82095348fb10"). InnerVolumeSpecName "kube-api-access-t5mhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.723124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerDied","Data":"a98469b953fdea84db2353b46820e7ccea308550c6d0675a79c61f90585562e6"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.723178 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a98469b953fdea84db2353b46820e7ccea308550c6d0675a79c61f90585562e6" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.723149 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.726783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-89mld" event={"ID":"13613099-2932-4476-8032-82095348fb10","Type":"ContainerDied","Data":"0d6cb9581f933e041346e0d413379c356e5ec4a01767e314546263b6c74898b2"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.726865 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6cb9581f933e041346e0d413379c356e5ec4a01767e314546263b6c74898b2" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.726937 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.730210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ee-account-create-update-56zfp" event={"ID":"2392ab6f-ca9b-4211-bd23-a243ce0ee554","Type":"ContainerDied","Data":"d4cf9631195a64608c3f002c83e4f091ee13070d383c3da9feede1c63959b9ad"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.730241 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cf9631195a64608c3f002c83e4f091ee13070d383c3da9feede1c63959b9ad" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.730285 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.737344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerStarted","Data":"2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.743409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"11476fa4e67a7467736dc9b47cc14a6a3b2a8960fb2f1a07b6d06a7794a1b35e"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746453 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"6c07a623-53fe-44a2-9810-5d1137c659c3\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746526 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"6c07a623-53fe-44a2-9810-5d1137c659c3\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746897 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746907 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746915 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746922 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746932 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746941 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746950 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjl27\" (UniqueName: 
\"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746961 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746969 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.747749 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c07a623-53fe-44a2-9810-5d1137c659c3" (UID: "6c07a623-53fe-44a2-9810-5d1137c659c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.748077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2ntm" event={"ID":"e00abb05-5932-47c8-9bd4-34014f966013","Type":"ContainerDied","Data":"1021ce56a65f1678d6067bce77001cc3379da23303902ddfacdf17e2cf71d0d6"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.748193 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1021ce56a65f1678d6067bce77001cc3379da23303902ddfacdf17e2cf71d0d6" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.748333 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.751478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac9c-account-create-update-6cnjz" event={"ID":"1f786311-b5ef-427f-b167-c49267de28c6","Type":"ContainerDied","Data":"2deeaef8b972645a1d4c815ad2b00a78dfaff0b6cd39c4e7e87229596ae5df93"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.751520 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2deeaef8b972645a1d4c815ad2b00a78dfaff0b6cd39c4e7e87229596ae5df93" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.751592 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.758217 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zbw76" podStartSLOduration=1.8368464100000002 podStartE2EDuration="10.758198982s" podCreationTimestamp="2026-01-30 14:05:37 +0000 UTC" firstStartedPulling="2026-01-30 14:05:38.494018105 +0000 UTC m=+1349.195366596" lastFinishedPulling="2026-01-30 14:05:47.415370677 +0000 UTC m=+1358.116719168" observedRunningTime="2026-01-30 14:05:47.756602623 +0000 UTC m=+1358.457951124" watchObservedRunningTime="2026-01-30 14:05:47.758198982 +0000 UTC m=+1358.459547473" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.763851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f03-account-create-update-s5gbm" event={"ID":"6c07a623-53fe-44a2-9810-5d1137c659c3","Type":"ContainerDied","Data":"ee48e1466c00be71a5cc4e94080113b3179b45afeb01e2591c730c312c7e1330"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.763886 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee48e1466c00be71a5cc4e94080113b3179b45afeb01e2591c730c312c7e1330" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.763940 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.769835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl" (OuterVolumeSpecName: "kube-api-access-wsgsl") pod "6c07a623-53fe-44a2-9810-5d1137c659c3" (UID: "6c07a623-53fe-44a2-9810-5d1137c659c3"). InnerVolumeSpecName "kube-api-access-wsgsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.848501 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.848769 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:48 crc kubenswrapper[4793]: I0130 14:05:48.776204 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"e6907cc4a13ada511c431bc65d19038e6579ea9e06b02d2113fec03a91364c05"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.806535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"fa17303ac81f2866c07d19bb2791483d673952e150dbd38aeac5b7f7eabe7145"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"d0bb2976fcd9f88d5b17ac1344e29c8a7f6f0d50d91ae2369adc070a90760ebc"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"85cc081d349d8f25d864c73f8ae1cf92b099090c00f1063588734a402ae9ab35"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"bda63c33421cef222fc8346e2b9032522aed037330fe15c10e51e24ebf14a667"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"3147d8a7ab7c1d494fe3d27290744f7596eb55fa8d698807dbfd2b3a8b2c563e"} Jan 30 14:05:51 crc kubenswrapper[4793]: I0130 14:05:51.835755 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"59900819fa6df37dc180cc6c984672f1f3438adc2e7c3ae2fcb67afa9bb927f8"} Jan 30 14:05:51 crc kubenswrapper[4793]: I0130 14:05:51.835827 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"002d93801806dd0f9073e76b9fe0dd9d5b2c07d7aa2f976d76b8b977cf3c98b6"} Jan 30 14:05:51 crc kubenswrapper[4793]: I0130 14:05:51.886113 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.499502482 podStartE2EDuration="50.886039499s" podCreationTimestamp="2026-01-30 14:05:01 +0000 UTC" firstStartedPulling="2026-01-30 14:05:35.454834956 +0000 UTC m=+1346.156183447" lastFinishedPulling="2026-01-30 14:05:49.841371983 +0000 UTC m=+1360.542720464" observedRunningTime="2026-01-30 14:05:51.878349441 +0000 UTC 
m=+1362.579697932" watchObservedRunningTime="2026-01-30 14:05:51.886039499 +0000 UTC m=+1362.587388020" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.238452 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.238993 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239010 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239034 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239056 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239068 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00abb05-5932-47c8-9bd4-34014f966013" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239075 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00abb05-5932-47c8-9bd4-34014f966013" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239083 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239089 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239100 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f786311-b5ef-427f-b167-c49267de28c6" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239105 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f786311-b5ef-427f-b167-c49267de28c6" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239153 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13613099-2932-4476-8032-82095348fb10" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239159 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="13613099-2932-4476-8032-82095348fb10" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239302 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239324 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239334 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239360 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1f786311-b5ef-427f-b167-c49267de28c6" containerName="mariadb-account-create-update" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239379 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00abb05-5932-47c8-9bd4-34014f966013" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239393 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="13613099-2932-4476-8032-82095348fb10" containerName="mariadb-database-create" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.240210 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.244653 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.255574 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.332836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333173 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333314 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333802 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435502 4793 
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435591 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435679 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435721 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.436481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.436615 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.436841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.437091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.437542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.454393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.554028 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.829331 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"]
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.844502 4793 generic.go:334] "Generic (PLEG): container finished" podID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" containerID="2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500" exitCode=0
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.844612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerDied","Data":"2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500"}
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.846460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerStarted","Data":"bb05c1a5e71872db9d1f0feebcb1261f0a0b54ef70c537588201ef29f3f19c4c"}
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.854311 4793 generic.go:334] "Generic (PLEG): container finished" podID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerID="aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c" exitCode=0
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.854395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerDied","Data":"aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c"}
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.857183 4793 generic.go:334] "Generic (PLEG): container finished" podID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerID="d4cf0d819a831c4b22d621ad832e53fd5393704103774f332bf0ecbe457050ee" exitCode=0
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.857254 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerDied","Data":"d4cf0d819a831c4b22d621ad832e53fd5393704103774f332bf0ecbe457050ee"}
Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.113158 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbw76"
Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.163120 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.163252 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.163321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.168367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r" (OuterVolumeSpecName: "kube-api-access-xcb5r") pod "caec468e-bf72-4c93-8b47-6aac4c7a0b3d" (UID: "caec468e-bf72-4c93-8b47-6aac4c7a0b3d"). InnerVolumeSpecName "kube-api-access-xcb5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.186547 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caec468e-bf72-4c93-8b47-6aac4c7a0b3d" (UID: "caec468e-bf72-4c93-8b47-6aac4c7a0b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.212432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data" (OuterVolumeSpecName: "config-data") pod "caec468e-bf72-4c93-8b47-6aac4c7a0b3d" (UID: "caec468e-bf72-4c93-8b47-6aac4c7a0b3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.265015 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.265073 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.265083 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.868766 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerDied","Data":"73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c"} Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.868844 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.868801 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.871884 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerStarted","Data":"80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24"} Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.871955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.914492 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" podStartSLOduration=2.914466144 podStartE2EDuration="2.914466144s" podCreationTimestamp="2026-01-30 14:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:54.905164277 +0000 UTC m=+1365.606512788" watchObservedRunningTime="2026-01-30 14:05:54.914466144 +0000 UTC m=+1365.615814645" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.256928 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.287211 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:05:55 crc kubenswrapper[4793]: E0130 14:05:55.287739 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" containerName="keystone-db-sync" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.287758 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" containerName="keystone-db-sync" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.287929 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" 
containerName="keystone-db-sync" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.288562 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.295658 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.295863 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.296026 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.296276 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.296392 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.323880 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.339407 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.340742 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.372996 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390575 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390622 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390666 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390685 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390758 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391119 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391200 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391279 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391443 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.492988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.493921 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494838 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494978 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlxm\" (UniqueName: 
\"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.495015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.495116 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.495137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.497041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.499306 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.508690 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.514698 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.515514 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.518401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " 
pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.510337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.518721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.557709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.559393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.624917 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.634657 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.636130 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.656897 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.660668 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.690028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-f5qx4" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.695791 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.696036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.696595 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739461 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739558 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739649 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.831685 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.832730 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.837653 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840698 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840820 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840878 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.841598 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.841811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.842892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.844744 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2b9wh" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.853960 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.878258 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.916338 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.929708 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.946872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.946985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.947017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.972189 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.973293 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.978631 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.978815 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5kb4p" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.979165 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.002589 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.005259 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.013875 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.014086 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8krj5" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.014210 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.028992 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.047783 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048326 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048479 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"barbican-db-sync-gpt4t\" (UID: 
\"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048560 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048577 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.049512 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.079543 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.081681 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.088290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.103969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.104025 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139333 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:56 crc kubenswrapper[4793]: E0130 14:05:56.139770 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerName="glance-db-sync" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerName="glance-db-sync" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139965 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerName="glance-db-sync" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.140591 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.151811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.151862 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.151940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152427 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152493 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152568 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152612 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152737 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152753 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.159543 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.159794 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.160136 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-brjvn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.161944 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.168869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.173495 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.183773 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.183770 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.183966 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.184557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.198287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j" (OuterVolumeSpecName: "kube-api-access-6bt5j") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). InnerVolumeSpecName "kube-api-access-6bt5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.200730 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.208434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.243679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262924 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262957 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " 
pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263061 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263171 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263197 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263300 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263341 4793 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263352 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.264984 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.265555 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.265889 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.266949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.280835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.281650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.287157 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.290640 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.295210 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.307283 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.315843 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.316776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.318126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.318329 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.319638 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.319937 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.338659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.371174 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374101 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374246 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374300 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374357 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374483 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.383402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.389933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.408905 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.420653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.421521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.448424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data" (OuterVolumeSpecName: "config-data") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.450950 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.452388 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.471499 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483502 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 
14:05:56.483860 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483989 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.484765 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.485965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.491572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.491912 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.494361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.501502 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.507987 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.585692 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.591570 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.603708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.603885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.604192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.589693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.586571 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.606952 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.596814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.630492 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.632796 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.642835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.796861 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.803841 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.964529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerDied","Data":"7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b"} Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.964580 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.964670 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.995098 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" containerID="cri-o://80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24" gracePeriod=10 Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.995222 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" event={"ID":"e6a668ba-7440-4eb2-ba94-29c9f1916625","Type":"ContainerStarted","Data":"c2a515cc3d3f339a5e32e30b902a887bb34f4e6875238ac55c8088138646231b"} Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.123584 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.370392 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.454954 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.694087 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.735825 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:05:57 crc kubenswrapper[4793]: W0130 14:05:57.752812 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056322cc_65a1_41ad_84a8_a01c8b7e2ac3.slice/crio-2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5 WatchSource:0}: Error finding container 2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5: Status 404 returned error can't find the container with id 2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5 Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.909122 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.930867 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.937559 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.951543 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:57.988919 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:57.988953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.006227 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096553 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096894 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096923 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.117353 4793 generic.go:334] "Generic (PLEG): container finished" podID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerID="80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24" exitCode=0 Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.117438 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerDied","Data":"80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.137770 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerStarted","Data":"b3e8e1acd1cd561d606e595452b7ed4d9ad040eaf08a66d7af08e7308d6d261e"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.180238 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.183104 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" event={"ID":"056322cc-65a1-41ad-84a8-a01c8b7e2ac3","Type":"ContainerStarted","Data":"2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207883 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.208012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.209008 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod 
\"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.209279 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.209530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.210023 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.210532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.231814 4793 generic.go:334] "Generic (PLEG): container finished" podID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerID="15d506971acedaa7bb99095c847196af33271345f5a9e05340688d33bdaff291" exitCode=0 Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.231883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" event={"ID":"e6a668ba-7440-4eb2-ba94-29c9f1916625","Type":"ContainerDied","Data":"15d506971acedaa7bb99095c847196af33271345f5a9e05340688d33bdaff291"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.234895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerStarted","Data":"fc613fe2ad6c1be056bd77d206032a6320f75af4b1f9de343208058c0b3d8709"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.256622 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.296732 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-787bd77877-l9df5" event={"ID":"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88","Type":"ContainerStarted","Data":"0b3a3424f23b7d6c10b04af0639314688a591e4cf45a995b12aa2a751c3d037b"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.318187 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerStarted","Data":"c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438"} Jan 30 14:05:58 
crc kubenswrapper[4793]: I0130 14:05:58.318247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerStarted","Data":"0235cbe667410a12fd0f43900b65c18ce6c6b1f1487e76a077fc7aad8e3b66de"}
Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.373484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerStarted","Data":"6d4763986d1b4a11b99da97ae431575d2b3082d3a2bdcdbedb9c248948af623d"}
Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.377336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerStarted","Data":"951aaae1b3a62ddc2954a80d0b215b523c731d1bf004dc9a3391b04cbf64290b"}
Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.400309 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p79cl" podStartSLOduration=3.400289783 podStartE2EDuration="3.400289783s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:58.366648558 +0000 UTC m=+1369.067997049" watchObservedRunningTime="2026-01-30 14:05:58.400289783 +0000 UTC m=+1369.101638274"
Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.416021 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c"
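The pod_startup_latency_tracker record above is self-checking: podStartSLOduration (3.400289783s) matches watchObservedRunningTime (2026-01-30 14:05:58.400289783 UTC) minus podCreationTimestamp (2026-01-30 14:05:55 UTC), and the 0001-01-01 pull timestamps mean no image pull contributed to the window. A quick verification sketch, with values copied from the record (datetime only carries microseconds, so the last digits truncate):

    from datetime import datetime, timezone

    created = datetime(2026, 1, 30, 14, 5, 55, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 30, 14, 5, 58, 400289, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 3.400289 ~= podStartSLOduration=3.400289783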
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664191 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664239 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664560 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.686238 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv" (OuterVolumeSpecName: "kube-api-access-ppqxv") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "kube-api-access-ppqxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.766786 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.776254 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:05:58 crc kubenswrapper[4793]: E0130 14:05:58.777635 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="init" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.777655 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="init" Jan 30 14:05:58 crc kubenswrapper[4793]: E0130 14:05:58.777672 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.777678 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.780563 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.784520 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.812606 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jb79g" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.821319 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.821511 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.866595 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.869760 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.879783 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.885420 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.893453 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.969193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config" (OuterVolumeSpecName: "config") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017654 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017730 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017749 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017850 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.018028 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.018072 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.018083 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.019465 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.061759 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: E0130 14:05:59.062401 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run kube-api-access-ggxjm logs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="95920882-93c3-4a03-bfc1-cfeaeef10bd6" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.101980 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121135 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122518 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.123120 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.126957 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.127727 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.132934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.136020 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.184734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.189267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.204109 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.207347 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.247761 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.300025 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.304837 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.323613 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.329886 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.329962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.329991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.330030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.330087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.349895 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.407142 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433407 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433613 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434193 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434272 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434423 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wstbg\" (UniqueName: 
\"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.435188 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.435789 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.443929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.474295 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.474431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"50cb694f90f1d6a53f515af750afb638a61a81c6b156cbc3d6081c5686d9e08c"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.496601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.514022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerDied","Data":"bb05c1a5e71872db9d1f0feebcb1261f0a0b54ef70c537588201ef29f3f19c4c"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.514246 4793 scope.go:117] "RemoveContainer" containerID="80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.514454 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542868 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543057 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543337 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543444 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543478 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543524 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.547795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.557288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.577720 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.599187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.600435 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.608359 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm" (OuterVolumeSpecName: "kube-api-access-9zlxm") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "kube-api-access-9zlxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.611372 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerStarted","Data":"3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.636124 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.641134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.651214 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.666483 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.707368 4793 generic.go:334] "Generic (PLEG): container finished" podID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerID="baf53c748c6a6992b01298fe55003ed2cd87ea55e116f674ef10391d191eb4a2" exitCode=0 Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.707472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" event={"ID":"056322cc-65a1-41ad-84a8-a01c8b7e2ac3","Type":"ContainerDied","Data":"baf53c748c6a6992b01298fe55003ed2cd87ea55e116f674ef10391d191eb4a2"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.721126 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8698dbdc7f-7rwcn" event={"ID":"1f30f95a-540c-4e30-acce-229ae81b4215","Type":"ContainerStarted","Data":"195ee6e5e0794333cda4ea233faeb9fe7d4329bd8a1e2d492ad5c4a6790f9c89"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.721271 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.768802 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.800804 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9k2k7" podStartSLOduration=4.800781967 podStartE2EDuration="4.800781967s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:59.677190137 +0000 UTC m=+1370.378538638" watchObservedRunningTime="2026-01-30 14:05:59.800781967 +0000 UTC m=+1370.502130468" Jan 30 14:05:59 crc kubenswrapper[4793]: W0130 14:05:59.835165 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb318d131_c8b9_41a5_a500_f8a9405e0074.slice/crio-de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85 WatchSource:0}: Error finding container de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85: Status 404 returned error can't find the container with id de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85 Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.848982 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.891123 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.899645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.929387 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.947153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:05:59.993172 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:05:59.993201 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.037011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config" (OuterVolumeSpecName: "config") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.063236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.097161 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.097346 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.107599 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.117718 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.133460 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.143092 4793 scope.go:117] "RemoveContainer" containerID="d4cf0d819a831c4b22d621ad832e53fd5393704103774f332bf0ecbe457050ee" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200616 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200762 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200802 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200926 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200955 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.201389 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.203911 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs" (OuterVolumeSpecName: "logs") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.211255 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.211804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm" (OuterVolumeSpecName: "kube-api-access-ggxjm") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "kube-api-access-ggxjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.218363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts" (OuterVolumeSpecName: "scripts") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.221226 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.221326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data" (OuterVolumeSpecName: "config-data") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.235679 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312067 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312105 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312137 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312147 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312158 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312166 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312177 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.352567 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.418393 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.444251 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" path="/var/lib/kubelet/pods/d503f433-f37b-45ed-a7e5-fc845b97e985/volumes" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.615728 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.741757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742311 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742484 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742552 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.766390 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn" (OuterVolumeSpecName: "kube-api-access-sqzsn") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "kube-api-access-sqzsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.787257 4793 generic.go:334] "Generic (PLEG): container finished" podID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" exitCode=0 Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.787311 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerDied","Data":"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.787337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerStarted","Data":"de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.789165 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797422 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" event={"ID":"e6a668ba-7440-4eb2-ba94-29c9f1916625","Type":"ContainerDied","Data":"c2a515cc3d3f339a5e32e30b902a887bb34f4e6875238ac55c8088138646231b"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797466 4793 scope.go:117] "RemoveContainer" containerID="15d506971acedaa7bb99095c847196af33271345f5a9e05340688d33bdaff291" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797569 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797859 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.798310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.838108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" event={"ID":"056322cc-65a1-41ad-84a8-a01c8b7e2ac3","Type":"ContainerDied","Data":"2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.838172 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.838543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.858291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861096 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861128 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861138 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861147 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.895145 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config" (OuterVolumeSpecName: "config") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.968672 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.968927 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.030503 4793 scope.go:117] "RemoveContainer" containerID="baf53c748c6a6992b01298fe55003ed2cd87ea55e116f674ef10391d191eb4a2" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.040087 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.068440 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.097199 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.153393 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.280973 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: E0130 14:06:01.281398 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281411 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: E0130 14:06:01.281421 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281427 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281623 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281642 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.282990 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.292580 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.300873 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.334435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.386889 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.386942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.386960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387076 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387094 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.390332 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] 
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.408102 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.488904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.489601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490644 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490699 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490748 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.491085 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") device mount path 
\"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.492099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.498687 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.500860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.519255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.521333 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.533106 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.641333 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.866776 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"abb829370f6052fa5b93898ca6acb8788a4543ea051b65ba7f0f97b896bb3dd6"} Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.872850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerStarted","Data":"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630"} Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.876561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerStarted","Data":"75d5f63d74ded6af6fe90efd5846a2c83282bfbfb878df2f8d8cd8df32ecf051"} Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.895606 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" podStartSLOduration=4.895584603 podStartE2EDuration="4.895584603s" podCreationTimestamp="2026-01-30 14:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:01.891163625 +0000 UTC m=+1372.592512116" watchObservedRunningTime="2026-01-30 14:06:01.895584603 +0000 UTC m=+1372.596933094" Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.443431 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" path="/var/lib/kubelet/pods/056322cc-65a1-41ad-84a8-a01c8b7e2ac3/volumes" Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.447002 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95920882-93c3-4a03-bfc1-cfeaeef10bd6" path="/var/lib/kubelet/pods/95920882-93c3-4a03-bfc1-cfeaeef10bd6/volumes" Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.447564 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" path="/var/lib/kubelet/pods/e6a668ba-7440-4eb2-ba94-29c9f1916625/volumes" Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.696330 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.917858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerStarted","Data":"b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd"} Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.953774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerStarted","Data":"bb1822de99167e67b698d62e79b73155f8af99f3f73a4a9033d2f811e3931452"} Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.954039 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:06:05 crc kubenswrapper[4793]: I0130 14:06:05.011646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerStarted","Data":"40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e"} Jan 30 14:06:05 crc kubenswrapper[4793]: I0130 14:06:05.013646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerStarted","Data":"4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e"} Jan 30 14:06:05 crc kubenswrapper[4793]: I0130 14:06:05.044705 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.044686866 podStartE2EDuration="6.044686866s" podCreationTimestamp="2026-01-30 14:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:05.037169102 +0000 UTC m=+1375.738517593" watchObservedRunningTime="2026-01-30 14:06:05.044686866 +0000 UTC m=+1375.746035357" Jan 30 14:06:06 crc kubenswrapper[4793]: I0130 14:06:06.028684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerStarted","Data":"7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7"} Jan 30 14:06:06 crc kubenswrapper[4793]: I0130 14:06:06.069664 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.069645114 podStartE2EDuration="5.069645114s" podCreationTimestamp="2026-01-30 14:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:06.055100118 +0000 UTC m=+1376.756448609" watchObservedRunningTime="2026-01-30 14:06:06.069645114 +0000 UTC m=+1376.770993605" Jan 30 14:06:08 crc kubenswrapper[4793]: I0130 14:06:08.418320 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:06:08 crc kubenswrapper[4793]: I0130 14:06:08.512174 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:06:08 crc kubenswrapper[4793]: I0130 14:06:08.512385 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" containerID="cri-o://610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d" gracePeriod=10 Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.270024 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.270250 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" containerID="cri-o://40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e" gracePeriod=30 Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.272366 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" containerID="cri-o://7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7" gracePeriod=30 Jan 30 14:06:09 crc 
kubenswrapper[4793]: I0130 14:06:09.413262 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.413832 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" containerID="cri-o://b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd" gracePeriod=30 Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.413905 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" containerID="cri-o://4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e" gracePeriod=30 Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.433500 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.496678 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b9fc5f8f6-nj7xv"] Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.501221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.508852 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.523982 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9fc5f8f6-nj7xv"] Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.609447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-config-data\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.609899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-scripts\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-combined-ca-bundle\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-secret-key\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610383 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjs5m\" (UniqueName: 
\"kubernetes.io/projected/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-kube-api-access-sjs5m\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-tls-certs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-logs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-tls-certs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-logs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712302 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-config-data\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-scripts\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712383 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-combined-ca-bundle\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-secret-key\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712462 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjs5m\" (UniqueName: \"kubernetes.io/projected/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-kube-api-access-sjs5m\") pod 
\"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.713550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-logs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.713832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-scripts\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.714611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-config-data\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.719546 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-secret-key\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.719751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-combined-ca-bundle\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.741191 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-tls-certs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.742357 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjs5m\" (UniqueName: \"kubernetes.io/projected/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-kube-api-access-sjs5m\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.831058 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.105726 4793 generic.go:334] "Generic (PLEG): container finished" podID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerID="610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d" exitCode=0 Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.105790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerDied","Data":"610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d"} Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111006 4793 generic.go:334] "Generic (PLEG): container finished" podID="95da467e-d092-4859-b82e-669b122856c9" containerID="7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7" exitCode=0 Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111031 4793 generic.go:334] "Generic (PLEG): container finished" podID="95da467e-d092-4859-b82e-669b122856c9" containerID="40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e" exitCode=143 Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111084 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerDied","Data":"7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7"} Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerDied","Data":"40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e"} Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113600 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerID="4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e" exitCode=0 Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113620 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerID="b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd" exitCode=143 Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113650 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerDied","Data":"4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e"} Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113666 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerDied","Data":"b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd"} Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.115110 4793 generic.go:334] "Generic (PLEG): container finished" podID="8195589a-9117-4f82-875b-1e0deec11c01" containerID="c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438" exitCode=0 Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.115134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerDied","Data":"c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438"} Jan 30 14:06:12 crc kubenswrapper[4793]: I0130 14:06:12.413249 4793 
Jan 30 14:06:12 crc kubenswrapper[4793]: I0130 14:06:12.413249 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:06:12 crc kubenswrapper[4793]: I0130 14:06:12.413844 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:06:16 crc kubenswrapper[4793]: I0130 14:06:16.809202 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout"
Jan 30 14:06:18 crc kubenswrapper[4793]: I0130 14:06:18.829362 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 14:06:18 crc kubenswrapper[4793]: I0130 14:06:18.848462 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.456990 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.461863 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf"
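[editor's note] The liveness failures above come from the kubelet's prober hitting an HTTP endpoint (here 127.0.0.1:8798/health for machine-config-daemon), while the galera failures are exec probes timing out. A minimal sketch of an HTTP probe of that shape; the host, port, and path match the log output, but the timing values are assumptions, not the actual machine-config-operator manifest:

// http_probe.go - sketch of an HTTP liveness probe definition.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // probed over localhost, per the log output
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    10, // assumed
		TimeoutSeconds:   3,  // assumed
		FailureThreshold: 3,  // assumed
	}
	fmt.Printf("%+v\n", probe)
}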
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.641810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642237 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642333 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642384 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642461 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642484 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642545 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc 
kubenswrapper[4793]: I0130 14:06:19.642599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.646958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs" (OuterVolumeSpecName: "logs") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.647246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.688222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.688386 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd" (OuterVolumeSpecName: "kube-api-access-v9szd") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "kube-api-access-v9szd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.695332 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts" (OuterVolumeSpecName: "scripts") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.721337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm" (OuterVolumeSpecName: "kube-api-access-74tsm") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "kube-api-access-74tsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745774 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745806 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745820 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745831 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745843 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745853 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.776658 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.835424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.846938 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.846965 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.869624 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data" (OuterVolumeSpecName: "config-data") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.872499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.883346 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config" (OuterVolumeSpecName: "config") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.887467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.888023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948846 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948882 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948894 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948905 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948916 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.074617 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.074775 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f9h5b6h587h649h698h55hddh65bh578h55chfhf9h66fh85h79h8dhffh585h67ch87h55dh5b9h5d7h65h577h5d5hdh685h669h64ch559h5d9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wstbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.077821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.255980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerDied","Data":"bb1822de99167e67b698d62e79b73155f8af99f3f73a4a9033d2f811e3931452"} Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.256033 4793 scope.go:117] "RemoveContainer" containerID="7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.256682 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.257896 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerDied","Data":"d3a25e8a3b91c8c4040360de5d0cfe31c348e5b8ddffa9f734cc6f66d6f94231"} Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.259237 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.263135 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.318620 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.324679 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.331916 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.338713 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.349807 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350309 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350331 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350376 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350385 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350400 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350408 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350418 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="init" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350426 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="init" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 
14:06:20.350637 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350671 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350687 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.351831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.355213 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.355421 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.363430 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.427365 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" path="/var/lib/kubelet/pods/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1/volumes" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.427980 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95da467e-d092-4859-b82e-669b122856c9" path="/var/lib/kubelet/pods/95da467e-d092-4859-b82e-669b122856c9/volumes" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.458953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459061 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
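[editor's note] "Cleaned up orphaned pod volumes dir" above is the kubelet removing /var/lib/kubelet/pods/<podUID>/volumes once every volume for a deleted pod has been torn down. A small sketch, to be run on the node itself, that checks whether such a directory still exists for a given UID (the UID is taken from the log; the path layout matches the log's "path=" field):

// orphan_check.go - sketch: inspect a pod's per-UID volumes directory on the node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	podUID := "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" // UID from the log
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("volumes dir already cleaned up:", dir)
		return
	} else if err != nil {
		panic(err)
	}
	// Remaining subdirectories are per-plugin, e.g. kubernetes.io~secret.
	for _, e := range entries {
		fmt.Println("still present:", e.Name())
	}
}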
\"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459198 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459224 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459267 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.560723 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561557 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562593 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.563326 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.566017 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.566252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.566336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.567854 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.589855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.636391 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.723631 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:21 crc kubenswrapper[4793]: I0130 14:06:21.810081 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 30 14:06:30 crc kubenswrapper[4793]: I0130 14:06:30.237207 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:30 crc kubenswrapper[4793]: I0130 14:06:30.238136 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.521715 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.522267 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhb8h655h64bh686h97h5d4h644h68h648h77hf5h57h656h64fh585h59fh77h5fh688h5cch55hc7h5d7h648h699h66ch5f7h66h58fh55h599q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtnhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-787bd77877-l9df5_openstack(4bd63ed1-4883-41ca-b7bb-f23bb10f5c88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.525598 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-787bd77877-l9df5" podUID="4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.987496 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.987769 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.989147 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-gpt4t" podUID="126207f4-9b13-4892-aa15-0616a488af8c"
Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.029596 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.029810 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77hd5h5cfh55dh689h654hc4h664h5f5h566h657h576h647hcfh687h96h5fch5dch66hb6h686h59h5cch688h594h654hbbh5dbh57h5f5h66bhfdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mcgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-8698dbdc7f-7rwcn_openstack(1f30f95a-540c-4e30-acce-229ae81b4215): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.036284 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-8698dbdc7f-7rwcn" podUID="1f30f95a-540c-4e30-acce-229ae81b4215"
Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.356817 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-gpt4t" podUID="126207f4-9b13-4892-aa15-0616a488af8c"
Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.791426 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.798518 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl"
Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860544 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860637 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860674 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860709 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860730 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860755 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860853 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860874 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 
14:06:40.860902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860972 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.861552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.861748 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs" (OuterVolumeSpecName: "logs") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.864624 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.865332 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts" (OuterVolumeSpecName: "scripts") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.865506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts" (OuterVolumeSpecName: "scripts") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.868151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl" (OuterVolumeSpecName: "kube-api-access-t5mrl") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "kube-api-access-t5mrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.879447 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf" (OuterVolumeSpecName: "kube-api-access-tt5hf") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "kube-api-access-tt5hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.879611 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.879877 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.892830 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.915021 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.922182 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data" (OuterVolumeSpecName: "config-data") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.940344 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data" (OuterVolumeSpecName: "config-data") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965159 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965194 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965221 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965265 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965278 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965288 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965298 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965308 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965319 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965331 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965341 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965352 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965363 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.000926 4793 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.066488 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.440364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerDied","Data":"0235cbe667410a12fd0f43900b65c18ce6c6b1f1487e76a077fc7aad8e3b66de"} Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.440630 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0235cbe667410a12fd0f43900b65c18ce6c6b1f1487e76a077fc7aad8e3b66de" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.440376 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.441908 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerDied","Data":"75d5f63d74ded6af6fe90efd5846a2c83282bfbfb878df2f8d8cd8df32ecf051"} Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.441969 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.485242 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.499825 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.542641 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: E0130 14:06:41.543112 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8195589a-9117-4f82-875b-1e0deec11c01" containerName="keystone-bootstrap" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543129 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8195589a-9117-4f82-875b-1e0deec11c01" containerName="keystone-bootstrap" Jan 30 14:06:41 crc kubenswrapper[4793]: E0130 14:06:41.543144 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543167 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" Jan 30 14:06:41 crc kubenswrapper[4793]: E0130 14:06:41.543204 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543210 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543355 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 
14:06:41.543370 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8195589a-9117-4f82-875b-1e0deec11c01" containerName="keystone-bootstrap" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543384 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.545686 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.552387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.553289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.640991 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694395 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694612 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 
14:06:41.694647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.796787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797864 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797979 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798412 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798766 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.799125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.804222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.808486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.810181 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.820410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.824520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.829593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.903040 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.911669 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.949233 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.017385 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.018487 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.021194 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.021413 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.021462 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.023074 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.024946 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.039008 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.205502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206363 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: 
\"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206411 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: E0130 14:06:42.219708 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 14:06:42 crc kubenswrapper[4793]: E0130 14:06:42.219862 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ffh654hddhf7h5f8h678h689h64bh575h584h58ch67bh555h568h65dh5cdh5b9hf4hdh669h59fh8bh67dh568hd4h6ch595hdh548h97h644h68dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sld6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f85d7b0d-5452-4175-842b-7d1505eb82e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.269064 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.277761 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307865 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307934 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.308008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"keystone-bootstrap-k4pgl\" (UID: 
\"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.313916 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.314125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.314527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.323869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.326668 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.328812 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.377481 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.409099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.409161 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410219 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data" (OuterVolumeSpecName: "config-data") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410259 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410334 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410532 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410596 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod 
\"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410832 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts" (OuterVolumeSpecName: "scripts") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411302 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs" (OuterVolumeSpecName: "logs") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411323 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411661 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data" (OuterVolumeSpecName: "config-data") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411899 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts" (OuterVolumeSpecName: "scripts") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.412237 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs" (OuterVolumeSpecName: "logs") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.413418 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.413727 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.413774 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.414485 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" path="/var/lib/kubelet/pods/47d5b1a9-edbe-4b43-8395-cb1fa337ad28/volumes" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.415168 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8195589a-9117-4f82-875b-1e0deec11c01" path="/var/lib/kubelet/pods/8195589a-9117-4f82-875b-1e0deec11c01/volumes" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.416485 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.416519 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn" (OuterVolumeSpecName: "kube-api-access-7mcgn") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "kube-api-access-7mcgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.419347 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg" (OuterVolumeSpecName: "kube-api-access-vtnhg") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "kube-api-access-vtnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.449548 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.453224 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-787bd77877-l9df5" event={"ID":"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88","Type":"ContainerDied","Data":"0b3a3424f23b7d6c10b04af0639314688a591e4cf45a995b12aa2a751c3d037b"} Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470405 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470419 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8698dbdc7f-7rwcn" event={"ID":"1f30f95a-540c-4e30-acce-229ae81b4215","Type":"ContainerDied","Data":"195ee6e5e0794333cda4ea233faeb9fe7d4329bd8a1e2d492ad5c4a6790f9c89"} Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470868 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470929 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6" gracePeriod=600 Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513305 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513349 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513363 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513373 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513383 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513394 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513404 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513414 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.538746 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.562115 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.579834 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.588919 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:06:43 crc kubenswrapper[4793]: I0130 14:06:43.463132 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6" exitCode=0 Jan 30 14:06:43 crc kubenswrapper[4793]: I0130 14:06:43.463178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"} Jan 30 14:06:44 crc kubenswrapper[4793]: I0130 14:06:44.425278 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f30f95a-540c-4e30-acce-229ae81b4215" path="/var/lib/kubelet/pods/1f30f95a-540c-4e30-acce-229ae81b4215/volumes" Jan 30 14:06:44 crc kubenswrapper[4793]: I0130 14:06:44.425993 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" path="/var/lib/kubelet/pods/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88/volumes" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.526328 4793 scope.go:117] "RemoveContainer" containerID="40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e" Jan 30 14:06:45 crc kubenswrapper[4793]: E0130 14:06:45.703787 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 14:06:45 crc kubenswrapper[4793]: E0130 14:06:45.704115 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkv5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4rknj_openstack(f55384b1-b1fd-43eb-8c8d-73398a8f2ecd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:45 crc kubenswrapper[4793]: E0130 14:06:45.705712 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4rknj" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.736343 4793 scope.go:117] "RemoveContainer" containerID="610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.806356 4793 scope.go:117] "RemoveContainer" containerID="d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.952363 4793 scope.go:117] "RemoveContainer" containerID="4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.979540 4793 scope.go:117] "RemoveContainer" containerID="b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.044354 4793 scope.go:117] 
"RemoveContainer" containerID="2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.087569 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9fc5f8f6-nj7xv"] Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.157490 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.265204 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:46 crc kubenswrapper[4793]: W0130 14:06:46.302848 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd812b0_55db_4cff_b0cd_4b18afe5a4be.slice/crio-2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380 WatchSource:0}: Error finding container 2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380: Status 404 returned error can't find the container with id 2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380 Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.503599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerStarted","Data":"2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.507978 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.512102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerStarted","Data":"32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.524576 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.555614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerStarted","Data":"bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.555663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerStarted","Data":"0b200ff63984e55abb5a41c94824217395ef35be23e2a95f9d4f2e58ad8bd186"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.567128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerStarted","Data":"f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.575914 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kkrt6" podStartSLOduration=8.656068344 
podStartE2EDuration="51.575896914s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="2026-01-30 14:05:57.783712077 +0000 UTC m=+1368.485060568" lastFinishedPulling="2026-01-30 14:06:40.703540637 +0000 UTC m=+1411.404889138" observedRunningTime="2026-01-30 14:06:46.543463628 +0000 UTC m=+1417.244812119" watchObservedRunningTime="2026-01-30 14:06:46.575896914 +0000 UTC m=+1417.277245405" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.598946 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k4pgl" podStartSLOduration=5.598924448 podStartE2EDuration="5.598924448s" podCreationTimestamp="2026-01-30 14:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:46.576835886 +0000 UTC m=+1417.278184387" watchObservedRunningTime="2026-01-30 14:06:46.598924448 +0000 UTC m=+1417.300272939" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.613762 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-gpt4t" podStartSLOduration=2.9693119169999997 podStartE2EDuration="51.613743342s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="2026-01-30 14:05:57.420993644 +0000 UTC m=+1368.122342135" lastFinishedPulling="2026-01-30 14:06:46.065425069 +0000 UTC m=+1416.766773560" observedRunningTime="2026-01-30 14:06:46.601072151 +0000 UTC m=+1417.302420662" watchObservedRunningTime="2026-01-30 14:06:46.613743342 +0000 UTC m=+1417.315091833" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.638549 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"17ee0f9e22a0cd0fff96008213438a2b5b0d6d932c5a2867f0d0bea08e359ce1"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.638585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"871fa7f802447852caa160c3d80754a40a8cf65dbdd07bec10a4f92b76ebe1b3"} Jan 30 14:06:46 crc kubenswrapper[4793]: E0130 14:06:46.644088 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4rknj" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.991664 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.650643 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerStarted","Data":"70a9907e2896545270e49ea508b4c54cd74205507f20d607e118c4c1d4eb4471"} Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.653558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerStarted","Data":"d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c"} Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.656472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.659401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8"}
Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.725729 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podStartSLOduration=38.725708532 podStartE2EDuration="38.725708532s" podCreationTimestamp="2026-01-30 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:47.691218636 +0000 UTC m=+1418.392567127" watchObservedRunningTime="2026-01-30 14:06:47.725708532 +0000 UTC m=+1418.427057023"
Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.727019 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b66cd9fcf-c94kp" podStartSLOduration=3.427715209 podStartE2EDuration="48.727012814s" podCreationTimestamp="2026-01-30 14:05:59 +0000 UTC" firstStartedPulling="2026-01-30 14:06:00.872068321 +0000 UTC m=+1371.573416812" lastFinishedPulling="2026-01-30 14:06:46.171365926 +0000 UTC m=+1416.872714417" observedRunningTime="2026-01-30 14:06:47.723380895 +0000 UTC m=+1418.424729396" watchObservedRunningTime="2026-01-30 14:06:47.727012814 +0000 UTC m=+1418.428361305"
Jan 30 14:06:48 crc kubenswrapper[4793]: I0130 14:06:48.671331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerStarted","Data":"dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8"}
Jan 30 14:06:48 crc kubenswrapper[4793]: I0130 14:06:48.672562 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433"}
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.608693 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp"
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.608938 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp"
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.683016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerStarted","Data":"7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951"}
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.710984 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.710965853 podStartE2EDuration="29.710965853s" podCreationTimestamp="2026-01-30 14:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:49.700796483 +0000 UTC m=+1420.402144984" watchObservedRunningTime="2026-01-30 14:06:49.710965853 +0000 UTC m=+1420.412314334"
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.831482 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.831714 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.695907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerStarted","Data":"031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170"}
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725313 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725371 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725384 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725518 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.912234 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.923137 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.746730 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.746706636999999 podStartE2EDuration="10.746706637s" podCreationTimestamp="2026-01-30 14:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:51.739668237 +0000 UTC m=+1422.441016738" watchObservedRunningTime="2026-01-30 14:06:51.746706637 +0000 UTC m=+1422.448055118"
Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.950469 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.950516 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.976223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.989439 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:52 crc kubenswrapper[4793]: I0130 14:06:52.727690 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:52 crc kubenswrapper[4793]: I0130 14:06:52.728088 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 14:06:59 crc kubenswrapper[4793]: I0130 14:06:59.610923 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused"
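The "Probe failed" entry above shows what the kubelet's HTTP startup probe observed: a GET against the pod IP refused at the TCP layer. A rough Python stand-in for such an httpGet check (not prober.go itself; the URL is copied from the failing horizon probe, and the 1-second timeout is an assumed value, not taken from the log):

import urllib.error
import urllib.request

def http_probe(url: str, timeout: float = 1.0) -> tuple[bool, str]:
    # Any 2xx/3xx response counts as success, as with an HTTP probe;
    # connection refused and timeouts surface as OSError subclasses.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400, f"HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        return False, f"HTTP {e.code}"
    except OSError as e:
        return False, str(e)

ok, detail = http_probe(
    "http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/")
print("probeResult:", "success" if ok else "failure", "-", detail)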
Jan 30 14:06:59 crc kubenswrapper[4793]: I0130 14:06:59.834304 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:07:01 crc kubenswrapper[4793]: I0130 14:07:01.813389 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b"}
Jan 30 14:07:01 crc kubenswrapper[4793]: I0130 14:07:01.814652 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerStarted","Data":"ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6"}
Jan 30 14:07:01 crc kubenswrapper[4793]: I0130 14:07:01.837486 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4rknj" podStartSLOduration=3.996604704 podStartE2EDuration="1m6.837466123s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="2026-01-30 14:05:57.54971981 +0000 UTC m=+1368.251068301" lastFinishedPulling="2026-01-30 14:07:00.390581229 +0000 UTC m=+1431.091929720" observedRunningTime="2026-01-30 14:07:01.83073473 +0000 UTC m=+1432.532083231" watchObservedRunningTime="2026-01-30 14:07:01.837466123 +0000 UTC m=+1432.538814614"
Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.678499 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.686495 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.715961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.865916 4793 generic.go:334] "Generic (PLEG): container finished" podID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerID="bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7" exitCode=0
Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.866810 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerDied","Data":"bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7"}
Jan 30 14:07:05 crc kubenswrapper[4793]: I0130 14:07:05.874903 4793 generic.go:334] "Generic (PLEG): container finished" podID="644bf4c3-aaaf-45fa-9692-73406a657226" containerID="32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592" exitCode=0
Jan 30 14:07:05 crc kubenswrapper[4793]: I0130 14:07:05.874983 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerDied","Data":"32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592"}
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.264883 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl"
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382643 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") "
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382759 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") "
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") "
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382863 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") "
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.383666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") "
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.383783 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") "
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.399446 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts" (OuterVolumeSpecName: "scripts") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.400940 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.407326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6" (OuterVolumeSpecName: "kube-api-access-669j6") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "kube-api-access-669j6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.422207 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.422773 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.448531 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data" (OuterVolumeSpecName: "config-data") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486645 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486687 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486703 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486719 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486733 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486746 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.885338 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl"
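For the finished keystone-bootstrap-k4pgl job the log walks each volume through the full teardown lifecycle: UnmountVolume started, then UnmountVolume.TearDown succeeded, then Volume detached. A small sketch (a hypothetical helper, not an existing tool; assumes a local copy of this log at kubelet.log with one entry per line) that pairs the three phases per volume to confirm nothing stayed attached:

import re
from collections import defaultdict

# The log escapes quotes inside structured messages, hence the optional
# backslashes before the quotes in two of these patterns.
STARTED  = re.compile(r'UnmountVolume started for volume \\?"([A-Za-z0-9-]+)\\?"')
TORNDOWN = re.compile(r'OuterVolumeSpecName: "([A-Za-z0-9-]+)"')
DETACHED = re.compile(r'Volume detached for volume \\?"([A-Za-z0-9-]+)\\?"')

def check_teardown(lines):
    phases = defaultdict(set)  # volume name -> phases seen
    for line in lines:
        for phase, rx in (("started", STARTED), ("torn down", TORNDOWN),
                          ("detached", DETACHED)):
            m = rx.search(line)
            if m:
                phases[m.group(1)].add(phase)
    for vol, seen in sorted(phases.items()):
        missing = {"started", "torn down", "detached"} - seen
        print(vol, "complete" if not missing else f"missing: {sorted(missing)}")

with open("kubelet.log") as f:  # assumed path to a copy of this log
    check_teardown(l for l in f
                   if "b8ea0161-c696-4578-a6f7-285a4253dc0f" in l)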
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.885336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerDied","Data":"0b200ff63984e55abb5a41c94824217395ef35be23e2a95f9d4f2e58ad8bd186"}
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.885471 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b200ff63984e55abb5a41c94824217395ef35be23e2a95f9d4f2e58ad8bd186"
Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.927939 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.036672 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d689db86f-zslsz"]
Jan 30 14:07:07 crc kubenswrapper[4793]: E0130 14:07:07.037130 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerName="keystone-bootstrap"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.037146 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerName="keystone-bootstrap"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.037288 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerName="keystone-bootstrap"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.037791 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.046860 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047058 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047145 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047228 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.068983 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d689db86f-zslsz"]
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.104954 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-combined-ca-bundle\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105067 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz"
\"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105139 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-fernet-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105215 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8fl\" (UniqueName: \"kubernetes.io/projected/0ed57c3d-4992-4cfa-8655-1587b5897df6-kube-api-access-5b8fl\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-scripts\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-config-data\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105340 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-public-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105366 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-credential-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209285 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b8fl\" (UniqueName: \"kubernetes.io/projected/0ed57c3d-4992-4cfa-8655-1587b5897df6-kube-api-access-5b8fl\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-scripts\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209409 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-config-data\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209449 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-public-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-credential-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209505 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-combined-ca-bundle\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209562 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-fernet-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.221276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-scripts\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.224091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-config-data\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.224096 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-combined-ca-bundle\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.224896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-public-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " 
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.225173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-fernet-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.227867 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.229709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-credential-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.253523 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b8fl\" (UniqueName: \"kubernetes.io/projected/0ed57c3d-4992-4cfa-8655-1587b5897df6-kube-api-access-5b8fl\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.373968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d689db86f-zslsz"
Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.608883 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused"
Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.832382 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.913205 4793 generic.go:334] "Generic (PLEG): container finished" podID="126207f4-9b13-4892-aa15-0616a488af8c" containerID="f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017" exitCode=0
Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.913270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerDied","Data":"f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017"}
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.425969 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kkrt6"
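The generic.go:334 "container finished" entries above carry exit codes, so completed one-shot jobs (db-sync, bootstrap) can be audited straight from the log. A short sketch (log path assumed, one entry per line) listing each finished container and flagging non-zero exits:

import re

# Matches the "Generic (PLEG): container finished" entries in this log.
FINISHED = re.compile(
    r'container finished" podID="(?P<pod>[0-9a-f-]+)" '
    r'containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<code>-?\d+)')

with open("kubelet.log") as f:  # assumed path to a copy of this log
    for line in f:
        m = FINISHED.search(line)
        if m:
            verdict = "ok" if m["code"] == "0" else "FAILED"
            print(f'{m["pod"]}  {m["cid"][:12]}  exit={m["code"]}  {verdict}')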
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470025 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") "
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470529 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") "
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") "
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") "
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470657 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") "
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.471891 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs" (OuterVolumeSpecName: "logs") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.487810 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts" (OuterVolumeSpecName: "scripts") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.488963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4" (OuterVolumeSpecName: "kube-api-access-gd7h4") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "kube-api-access-gd7h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.502639 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.530000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data" (OuterVolumeSpecName: "config-data") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573221 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573262 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573275 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573285 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573297 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.924241 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kkrt6"
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.924231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerDied","Data":"b3e8e1acd1cd561d606e595452b7ed4d9ad040eaf08a66d7af08e7308d6d261e"}
Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.924371 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e8e1acd1cd561d606e595452b7ed4d9ad040eaf08a66d7af08e7308d6d261e"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.622408 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65f95549b8-wtpxl"]
Jan 30 14:07:11 crc kubenswrapper[4793]: E0130 14:07:11.623507 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" containerName="placement-db-sync"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.623526 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" containerName="placement-db-sync"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.623748 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" containerName="placement-db-sync"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.624590 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635397 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8krj5"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635584 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635742 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635865 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.636505 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.663248 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65f95549b8-wtpxl"]
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-internal-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-config-data\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52q49\" (UniqueName: \"kubernetes.io/projected/57bfc822-1d30-49bc-a077-686b68e9c1e6-kube-api-access-52q49\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-public-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-combined-ca-bundle\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bfc822-1d30-49bc-a077-686b68e9c1e6-logs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.802726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.803782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-combined-ca-bundle\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.803881 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bfc822-1d30-49bc-a077-686b68e9c1e6-logs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-internal-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804396 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bfc822-1d30-49bc-a077-686b68e9c1e6-logs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-config-data\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52q49\" (UniqueName: \"kubernetes.io/projected/57bfc822-1d30-49bc-a077-686b68e9c1e6-kube-api-access-52q49\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-public-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.809515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl"
\"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.809681 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-internal-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.822124 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-combined-ca-bundle\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.823794 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-config-data\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.823825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-public-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.828653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52q49\" (UniqueName: \"kubernetes.io/projected/57bfc822-1d30-49bc-a077-686b68e9c1e6-kube-api-access-52q49\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.947909 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.911825 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.987165 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerDied","Data":"951aaae1b3a62ddc2954a80d0b215b523c731d1bf004dc9a3391b04cbf64290b"} Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.987415 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951aaae1b3a62ddc2954a80d0b215b523c731d1bf004dc9a3391b04cbf64290b" Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.987614 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.057250 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"126207f4-9b13-4892-aa15-0616a488af8c\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.057292 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"126207f4-9b13-4892-aa15-0616a488af8c\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.057483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"126207f4-9b13-4892-aa15-0616a488af8c\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.062656 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "126207f4-9b13-4892-aa15-0616a488af8c" (UID: "126207f4-9b13-4892-aa15-0616a488af8c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.086242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv" (OuterVolumeSpecName: "kube-api-access-sr8nv") pod "126207f4-9b13-4892-aa15-0616a488af8c" (UID: "126207f4-9b13-4892-aa15-0616a488af8c"). InnerVolumeSpecName "kube-api-access-sr8nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.112275 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "126207f4-9b13-4892-aa15-0616a488af8c" (UID: "126207f4-9b13-4892-aa15-0616a488af8c"). InnerVolumeSpecName "combined-ca-bundle". 
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.159884 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.159916 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.159926 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:14 crc kubenswrapper[4793]: W0130 14:07:14.353815 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57bfc822_1d30_49bc_a077_686b68e9c1e6.slice/crio-8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300 WatchSource:0}: Error finding container 8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300: Status 404 returned error can't find the container with id 8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.354959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65f95549b8-wtpxl"]
Jan 30 14:07:14 crc kubenswrapper[4793]: W0130 14:07:14.367003 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed57c3d_4992_4cfa_8655_1587b5897df6.slice/crio-9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28 WatchSource:0}: Error finding container 9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28: Status 404 returned error can't find the container with id 9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.370304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d689db86f-zslsz"]
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.995384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d689db86f-zslsz" event={"ID":"0ed57c3d-4992-4cfa-8655-1587b5897df6","Type":"ContainerStarted","Data":"9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28"}
Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.996714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65f95549b8-wtpxl" event={"ID":"57bfc822-1d30-49bc-a077-686b68e9c1e6","Type":"ContainerStarted","Data":"8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300"}
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.105078 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"]
Jan 30 14:07:16 crc kubenswrapper[4793]: E0130 14:07:16.105845 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.105862 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.106134 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync"
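Every pod created in this window follows the same admission sequence in the entries above: SyncLoop ADD, secret caches populated, volumes verified and mounted, sandbox created, then a ContainerStarted PLEG event. A sketch (assumed local log path; expects one entry per line, as reflowed here) that records the first timestamp of a few of those milestones per pod, to see where startup time goes:

import re
from collections import defaultdict

TS = re.compile(r"^Jan 30 (\d\d:\d\d:\d\d)")
MILESTONES = (
    ("added",   re.compile(r'"SyncLoop ADD".*pods=\["([^"]+)"\]')),
    ("sandbox", re.compile(r'"No sandbox for pod can be found.*" pod="([^"]+)"')),
    ("started", re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)"'
                           r' event=.*ContainerStarted')),
)

timeline = defaultdict(dict)  # pod -> {milestone: first timestamp seen}
with open("kubelet.log") as f:  # assumed path to a copy of this log
    for line in f:
        t = TS.match(line)
        if not t:
            continue
        for name, rx in MILESTONES:
            m = rx.search(line)
            if m and name not in timeline[m.group(1)]:
                timeline[m.group(1)][name] = t.group(1)

for pod, seen in sorted(timeline.items()):
    print(pod, seen)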
memory_manager.go:354] "RemoveStaleState removing state" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.107275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.112722 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.207805 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6dd7f7f8-htnvl"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.209609 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.212875 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.213146 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.213279 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2b9wh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.227097 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d78d76787-7f5jh"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.228380 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.235974 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.236925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.236959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.236988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.237009 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc 
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.237084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.237108 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.242245 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dd7f7f8-htnvl"]
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.258486 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d78d76787-7f5jh"]
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.276995 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"]
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.297738 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.300461 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.338976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af929740-592b-4d7f-9c99-061df6882206-logs\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339318 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339428 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653cedf2-2880-49ff-b177-8974b9f0ecdf-logs\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339833 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339919 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data-custom\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340000 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs6g\" (UniqueName: \"kubernetes.io/projected/653cedf2-2880-49ff-b177-8974b9f0ecdf-kube-api-access-mbs6g\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340491 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.341006 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342394 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342432 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rv7\" (UniqueName: \"kubernetes.io/projected/af929740-592b-4d7f-9c99-061df6882206-kube-api-access-f8rv7\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342506 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-combined-ca-bundle\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data-custom\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342753 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.343464 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.343979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.365175 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"]
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.375120 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.447513 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653cedf2-2880-49ff-b177-8974b9f0ecdf-logs\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.450290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653cedf2-2880-49ff-b177-8974b9f0ecdf-logs\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.454550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.458918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data-custom\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459265 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbs6g\" (UniqueName: \"kubernetes.io/projected/653cedf2-2880-49ff-b177-8974b9f0ecdf-kube-api-access-mbs6g\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459570 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rv7\" (UniqueName: \"kubernetes.io/projected/af929740-592b-4d7f-9c99-061df6882206-kube-api-access-f8rv7\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-combined-ca-bundle\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459887 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data-custom\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460066 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460186 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af929740-592b-4d7f-9c99-061df6882206-logs\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460292 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.472959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af929740-592b-4d7f-9c99-061df6882206-logs\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.473676 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.474173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data-custom\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.483517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.488008 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-combined-ca-bundle\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.488455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data-custom\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.489496 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.492300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.501556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rv7\" (UniqueName: \"kubernetes.io/projected/af929740-592b-4d7f-9c99-061df6882206-kube-api-access-f8rv7\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.502352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbs6g\" (UniqueName: \"kubernetes.io/projected/653cedf2-2880-49ff-b177-8974b9f0ecdf-kube-api-access-mbs6g\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.531258 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.556627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d78d76787-7f5jh"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562336 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562444 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562555 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.566416 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.573340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.573825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.588765 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.598729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:16 crc kubenswrapper[4793]: E0130 14:07:16.829147 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0"
Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.887797 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039317 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576"}
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039459 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="ceilometer-notification-agent" containerID="cri-o://b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" gracePeriod=30
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039673 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039760 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" containerID="cri-o://923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" gracePeriod=30
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039856 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" containerID="cri-o://1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" gracePeriod=30
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.052287 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8" exitCode=1
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.052340 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8"}
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.052937 4793 scope.go:117] "RemoveContainer" containerID="dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8"
Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.069378 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d689db86f-zslsz" event={"ID":"0ed57c3d-4992-4cfa-8655-1587b5897df6","Type":"ContainerStarted","Data":"3f287ac88c96afaae65d350043cfce7455dba0ab3f6639d47bd36b0be7a83d97"}
event={"ID":"0ed57c3d-4992-4cfa-8655-1587b5897df6","Type":"ContainerStarted","Data":"3f287ac88c96afaae65d350043cfce7455dba0ab3f6639d47bd36b0be7a83d97"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.070239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65f95549b8-wtpxl" event={"ID":"57bfc822-1d30-49bc-a077-686b68e9c1e6","Type":"ContainerStarted","Data":"3c4b90e584e671fccfcf606db61676f035f1df60975654e0b13044dc92b71347"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073223 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65f95549b8-wtpxl" event={"ID":"57bfc822-1d30-49bc-a077-686b68e9c1e6","Type":"ContainerStarted","Data":"a86058b646d896fef02aab189293f46ef58626db8f49b0a096ba1a82b0a7e285"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073475 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.114135 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65f95549b8-wtpxl" podStartSLOduration=6.114112861 podStartE2EDuration="6.114112861s" podCreationTimestamp="2026-01-30 14:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:17.108499285 +0000 UTC m=+1447.809847766" watchObservedRunningTime="2026-01-30 14:07:17.114112861 +0000 UTC m=+1447.815461352" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.155741 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.158777 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d689db86f-zslsz" podStartSLOduration=10.155107844 podStartE2EDuration="10.155107844s" podCreationTimestamp="2026-01-30 14:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:17.130345004 +0000 UTC m=+1447.831693495" watchObservedRunningTime="2026-01-30 14:07:17.155107844 +0000 UTC m=+1447.856456335" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.191166 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dd7f7f8-htnvl"] Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.348646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d78d76787-7f5jh"] Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.440636 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"] Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.083941 4793 generic.go:334] "Generic (PLEG): container finished" podID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerID="86521a408e3d25c11a7337fcc940bc0bc142bbff9725007bee5f593d4d4fea8f" exitCode=0 Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.084497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" 
event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerDied","Data":"86521a408e3d25c11a7337fcc940bc0bc142bbff9725007bee5f593d4d4fea8f"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.084548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerStarted","Data":"30fb4318627919dfef7bd7d37dac82088ae21ede274e001c1e66cb82e9d4e95c"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.086162 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d78d76787-7f5jh" event={"ID":"653cedf2-2880-49ff-b177-8974b9f0ecdf","Type":"ContainerStarted","Data":"155e6aa0821f872713dde4309217a3f9f45836ee063b8a383db90e4c1b729351"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.094876 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" exitCode=0 Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.095289 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" exitCode=2 Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.094936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.095405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.110940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.123101 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" event={"ID":"af929740-592b-4d7f-9c99-061df6882206","Type":"ContainerStarted","Data":"ce9a2834d75e989b4996cc6e5a702194d98c3aaa7e98470bbd0b9d77db207c67"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.127593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerStarted","Data":"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.127954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerStarted","Data":"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.128107 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.129215 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" 
event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerStarted","Data":"f97b2202fc16d2a3c18bd1abd87cac5c90aa96890b8132c11e4c4e9fbac70a09"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.129370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.183182 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56c564fddb-9cbqg" podStartSLOduration=2.183158873 podStartE2EDuration="2.183158873s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:18.16155732 +0000 UTC m=+1448.862905811" watchObservedRunningTime="2026-01-30 14:07:18.183158873 +0000 UTC m=+1448.884507364" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.138625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerStarted","Data":"bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a"} Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.139196 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.164878 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" podStartSLOduration=3.164859349 podStartE2EDuration="3.164859349s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:19.161294174 +0000 UTC m=+1449.862642685" watchObservedRunningTime="2026-01-30 14:07:19.164859349 +0000 UTC m=+1449.866207840" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.165401 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-577797dd7d-dhrt2"] Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.169622 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.174892 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.175124 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.204743 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577797dd7d-dhrt2"] Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.322966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-public-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data-custom\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323081 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a389d76c-e0de-4b8d-84b2-82aedd050f7f-logs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ngb\" (UniqueName: \"kubernetes.io/projected/a389d76c-e0de-4b8d-84b2-82aedd050f7f-kube-api-access-22ngb\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323223 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-internal-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-combined-ca-bundle\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425192 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-22ngb\" (UniqueName: \"kubernetes.io/projected/a389d76c-e0de-4b8d-84b2-82aedd050f7f-kube-api-access-22ngb\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-internal-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-combined-ca-bundle\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425336 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425399 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-public-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data-custom\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a389d76c-e0de-4b8d-84b2-82aedd050f7f-logs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425926 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a389d76c-e0de-4b8d-84b2-82aedd050f7f-logs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.430684 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-combined-ca-bundle\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.433024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data-custom\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.435448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.444979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-internal-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.448616 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-public-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.448952 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ngb\" (UniqueName: \"kubernetes.io/projected/a389d76c-e0de-4b8d-84b2-82aedd050f7f-kube-api-access-22ngb\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.489225 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.609160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.609467 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:20 crc kubenswrapper[4793]: I0130 14:07:20.189150 4793 generic.go:334] "Generic (PLEG): container finished" podID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerID="ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6" exitCode=0 Jan 30 14:07:20 crc kubenswrapper[4793]: I0130 14:07:20.189430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerDied","Data":"ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6"} Jan 30 14:07:20 crc kubenswrapper[4793]: I0130 14:07:20.542570 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577797dd7d-dhrt2"] Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.227580 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d78d76787-7f5jh" event={"ID":"653cedf2-2880-49ff-b177-8974b9f0ecdf","Type":"ContainerStarted","Data":"643273086e560dec2921a2eb77b5c8efe71ddf9a8874e5a6ad6314a55c5f83f0"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.227862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d78d76787-7f5jh" event={"ID":"653cedf2-2880-49ff-b177-8974b9f0ecdf","Type":"ContainerStarted","Data":"af17714dc1df2fa0408cdff26094746855f718a72e8fe0e97b5bbadd0c07079f"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.260659 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d78d76787-7f5jh" podStartSLOduration=2.650590823 podStartE2EDuration="5.260640749s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="2026-01-30 14:07:17.359226217 +0000 UTC m=+1448.060574708" lastFinishedPulling="2026-01-30 14:07:19.969276143 +0000 UTC m=+1450.670624634" observedRunningTime="2026-01-30 14:07:21.252178874 +0000 UTC m=+1451.953527385" watchObservedRunningTime="2026-01-30 14:07:21.260640749 +0000 UTC m=+1451.961989240" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.268075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" event={"ID":"af929740-592b-4d7f-9c99-061df6882206","Type":"ContainerStarted","Data":"276f2bcfcdbb4034f2621c20b42b288cddfcf0dd4a8ef08b418899b719afa302"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.268130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" event={"ID":"af929740-592b-4d7f-9c99-061df6882206","Type":"ContainerStarted","Data":"45f7aaca0a0ff8cfe6b883f5492be3d588aeee2190f8dec902ac7c3ad113e7ff"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.274092 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577797dd7d-dhrt2" event={"ID":"a389d76c-e0de-4b8d-84b2-82aedd050f7f","Type":"ContainerStarted","Data":"24f1ed1b5b88989a2fa39b7d9f9de2db99c0b16b303f2f6c39656e86d4d89733"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.274140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577797dd7d-dhrt2" 
event={"ID":"a389d76c-e0de-4b8d-84b2-82aedd050f7f","Type":"ContainerStarted","Data":"57b2c625731c3f35fca926d279e41c4247e77e8a5eddb40633ef7d98003c5cd1"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.310579 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" podStartSLOduration=2.548281377 podStartE2EDuration="5.310552398s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="2026-01-30 14:07:17.21358464 +0000 UTC m=+1447.914933131" lastFinishedPulling="2026-01-30 14:07:19.975855661 +0000 UTC m=+1450.677204152" observedRunningTime="2026-01-30 14:07:21.289865197 +0000 UTC m=+1451.991213698" watchObservedRunningTime="2026-01-30 14:07:21.310552398 +0000 UTC m=+1452.011900889" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.834620 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981754 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981828 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981910 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.982078 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.982116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.983063 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.002085 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g" (OuterVolumeSpecName: "kube-api-access-gkv5g") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "kube-api-access-gkv5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.012231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts" (OuterVolumeSpecName: "scripts") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.013199 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.034960 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.085615 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.085847 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.085932 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.086026 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.133198 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data" (OuterVolumeSpecName: "config-data") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.188245 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.282243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerDied","Data":"6d4763986d1b4a11b99da97ae431575d2b3082d3a2bdcdbedb9c248948af623d"} Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.282279 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d4763986d1b4a11b99da97ae431575d2b3082d3a2bdcdbedb9c248948af623d" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.282332 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.292835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577797dd7d-dhrt2" event={"ID":"a389d76c-e0de-4b8d-84b2-82aedd050f7f","Type":"ContainerStarted","Data":"cb375cd077935993ece603f76e3e2a78c761c0d3002d3112c9452fbd5054cbcd"} Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.320156 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-577797dd7d-dhrt2" podStartSLOduration=3.32013359 podStartE2EDuration="3.32013359s" podCreationTimestamp="2026-01-30 14:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:22.31564884 +0000 UTC m=+1453.016997331" watchObservedRunningTime="2026-01-30 14:07:22.32013359 +0000 UTC m=+1453.021482081" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.491178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: E0130 14:07:22.491521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerName="cinder-db-sync" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.491534 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerName="cinder-db-sync" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.491742 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerName="cinder-db-sync" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.492627 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.510852 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.511144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5kb4p" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.511372 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.512216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.520650 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611351 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.612836 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.613090 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" 
podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" containerID="cri-o://bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a" gracePeriod=10 Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.644838 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.646440 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.693129 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.718772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.727953 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.728298 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.730796 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.738450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.758527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.791780 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.793754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.803760 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.816789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.816836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.816868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.820040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod 
\"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.820197 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.820235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.824633 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.849232 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925101 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925192 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: 
I0130 14:07:22.925365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925393 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925464 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925568 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.926134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.926658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.926852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.927766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.929885 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.945889 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.028682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029321 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.037359 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.037425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.038210 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.039634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.044229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.046009 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.058459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.127991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.177467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.338729 4793 util.go:48] "No ready sandbox for pod can be found. 
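Each pod above also gets a kube-api-access-* volume (bvr52, hrw8b, nddvt): the projected service-account token volume Kubernetes generates automatically. The log only shows the plugin type ("kubernetes.io/projected"), so the sketch below reconstructs the typical shape of such a volume from the usual Kubernetes defaults; the field values are assumptions, not taken from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// 3607s is the token lifetime conventionally used for these
	// auto-generated volumes; assumed here, not logged.
	expiry := int64(3607)
	vol := corev1.Volume{
		Name: "kube-api-access-nddvt", // the cinder-api-0 volume named above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// A bound service-account token, rotated by kubelet.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// The cluster CA bundle, served from kube-root-ca.crt.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}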
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.340372 4793 generic.go:334] "Generic (PLEG): container finished" podID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerID="bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a" exitCode=0 Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.340449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerDied","Data":"bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a"} Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.402678 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" exitCode=0 Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.402970 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433"} Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"50cb694f90f1d6a53f515af750afb638a61a81c6b156cbc3d6081c5686d9e08c"} Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403096 4793 scope.go:117] "RemoveContainer" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403672 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.475433 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.484089 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.485719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.487304 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: 
\"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.488105 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.485573 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.488620 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.490217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.494689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.506075 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.506310 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.521437 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts" (OuterVolumeSpecName: "scripts") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.559987 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q" (OuterVolumeSpecName: "kube-api-access-sld6q") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "kube-api-access-sld6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.577937 4793 scope.go:117] "RemoveContainer" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.608071 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.608104 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.713790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.722593 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.737386 4793 scope.go:117] "RemoveContainer" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.739372 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.741304 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.750332 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.825136 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.829236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data" (OuterVolumeSpecName: "config-data") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.853981 4793 scope.go:117] "RemoveContainer" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.854752 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: E0130 14:07:23.855034 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576\": container with ID starting with 923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576 not found: ID does not exist" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.855075 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576"} err="failed to get container status \"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576\": rpc error: code = NotFound desc = could not find container \"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576\": container with ID starting with 923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576 not found: ID does not exist" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.855095 4793 scope.go:117] "RemoveContainer" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" Jan 30 14:07:23 crc kubenswrapper[4793]: E0130 14:07:23.856460 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b\": container with ID starting with 1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b not found: ID does not exist" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.856483 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b"} err="failed to get container status \"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b\": rpc error: code = NotFound desc = could not find container \"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b\": container with ID starting with 1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b not found: ID does not exist" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.856499 4793 scope.go:117] "RemoveContainer" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" Jan 30 14:07:23 crc kubenswrapper[4793]: E0130 14:07:23.862923 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433\": container with ID starting with b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433 not found: ID does not exist" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.862955 4793 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433"} err="failed to get container status \"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433\": rpc error: code = NotFound desc = could not find container \"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433\": container with ID starting with b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433 not found: ID does not exist" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955809 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955870 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955900 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955974 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.984033 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx" (OuterVolumeSpecName: "kube-api-access-745tx") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "kube-api-access-745tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.079684 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.089498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.142855 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.147499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.153215 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.174140 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.184977 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.185014 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.188706 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189137 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189155 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189165 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189171 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189186 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="ceilometer-notification-agent" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189193 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" 
containerName="ceilometer-notification-agent" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189204 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189209 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189219 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="init" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189224 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="init" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189391 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189406 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="ceilometer-notification-agent" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189414 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189426 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.191143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.192023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.194274 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.194350 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.203131 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.228166 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.232503 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config" (OuterVolumeSpecName: "config") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286311 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286378 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286407 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286438 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289311 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289464 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289983 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.290203 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.290374 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.394665 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.394720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395732 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hlzq\" (UniqueName: 
\"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395778 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.398894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.402732 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.406844 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.407061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.408007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.418861 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.424826 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" 
path="/var/lib/kubelet/pods/f85d7b0d-5452-4175-842b-7d1505eb82e0/volumes" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.454468 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerStarted","Data":"75a99447618824a28826d92bf0cd6be6c9e8089ca3fa2987920905ca99000ff1"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.455785 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerStarted","Data":"dea9c67f4ab17b561d40848ccf607759778f130142a4dfee52cb6203cfd164a1"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.458190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerStarted","Data":"159c1470b0ba252efe02d67b50c8e7273c57baeaea595257f321b0b7be1d2fd8"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.462273 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.464111 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerDied","Data":"30fb4318627919dfef7bd7d37dac82088ae21ede274e001c1e66cb82e9d4e95c"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.464165 4793 scope.go:117] "RemoveContainer" containerID="bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.531644 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.596333 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.614545 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.835208 4793 scope.go:117] "RemoveContainer" containerID="86521a408e3d25c11a7337fcc940bc0bc142bbff9725007bee5f593d4d4fea8f" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.840300 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.840371 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.841110 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8"} pod="openstack/horizon-5b9fc5f8f6-nj7xv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.841141 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" containerID="cri-o://f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8" gracePeriod=30 Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.500217 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerID="a550c028a717096d5e1912e30909f7370216f5f1ecf7d5091df70cd1de2ebf87" exitCode=0 Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.500718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerDied","Data":"a550c028a717096d5e1912e30909f7370216f5f1ecf7d5091df70cd1de2ebf87"} Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.666522 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.957378 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.411822 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" path="/var/lib/kubelet/pods/3ed51218-5677-4c7a-aeb6-1ec6c215178a/volumes" Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.536488 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerStarted","Data":"cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe"} Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.538615 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" 
event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerStarted","Data":"4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64"} Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.539707 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.541506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"d21421b35db87347d4a7181c28d855890a9a721d97cf5be20f5f36330a91c466"} Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.574365 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" podStartSLOduration=4.574342875 podStartE2EDuration="4.574342875s" podCreationTimestamp="2026-01-30 14:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:26.562280933 +0000 UTC m=+1457.263629434" watchObservedRunningTime="2026-01-30 14:07:26.574342875 +0000 UTC m=+1457.275691366" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.527911 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.578401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerStarted","Data":"bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5"} Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.578561 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" containerID="cri-o://cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe" gracePeriod=30 Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.578784 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.579024 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" containerID="cri-o://bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5" gracePeriod=30 Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.583394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerStarted","Data":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"} Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.583430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerStarted","Data":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"} Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.615949 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.615929452 podStartE2EDuration="5.615929452s" podCreationTimestamp="2026-01-30 14:07:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:27.603220695 +0000 UTC m=+1458.304569196" watchObservedRunningTime="2026-01-30 14:07:27.615929452 +0000 UTC m=+1458.317277943" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.634382 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.263232931 podStartE2EDuration="5.634361479s" podCreationTimestamp="2026-01-30 14:07:22 +0000 UTC" firstStartedPulling="2026-01-30 14:07:23.888487225 +0000 UTC m=+1454.589835716" lastFinishedPulling="2026-01-30 14:07:25.259615773 +0000 UTC m=+1455.960964264" observedRunningTime="2026-01-30 14:07:27.624388167 +0000 UTC m=+1458.325736658" watchObservedRunningTime="2026-01-30 14:07:27.634361479 +0000 UTC m=+1458.335709970" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.842575 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.624701 4793 generic.go:334] "Generic (PLEG): container finished" podID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerID="bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5" exitCode=0 Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.627350 4793 generic.go:334] "Generic (PLEG): container finished" podID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerID="cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe" exitCode=143 Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.624954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerDied","Data":"bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5"} Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.627645 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerDied","Data":"cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe"} Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.630555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672"} Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.767546 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.845725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.845804 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.845928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846017 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846172 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs" (OuterVolumeSpecName: "logs") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.847655 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.847675 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.855703 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts" (OuterVolumeSpecName: "scripts") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.868211 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt" (OuterVolumeSpecName: "kube-api-access-nddvt") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "kube-api-access-nddvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.871962 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.899218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952277 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952316 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952333 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952343 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.959530 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data" (OuterVolumeSpecName: "config-data") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.053770 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.426767 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.618929 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.619309 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.620121 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"} pod="openstack/horizon-6b66cd9fcf-c94kp" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.620174 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" containerID="cri-o://1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266" gracePeriod=30 Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.720671 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerDied","Data":"75a99447618824a28826d92bf0cd6be6c9e8089ca3fa2987920905ca99000ff1"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.720740 4793 scope.go:117] "RemoveContainer" containerID="bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.720911 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.751256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.751301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.757162 4793 scope.go:117] "RemoveContainer" containerID="cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.768262 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerID="f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8" exitCode=0 Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.769599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerDied","Data":"f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.787109 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.800874 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817034 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: E0130 14:07:29.817419 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817435 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" Jan 30 14:07:29 crc kubenswrapper[4793]: E0130 14:07:29.817449 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817455 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817621 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817651 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.818533 4793 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.823322 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.823483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.823512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.837609 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.837659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.853101 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868278 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcn8k\" (UniqueName: \"kubernetes.io/projected/3105dc9e-c178-4799-a658-044d4d9b8312-kube-api-access-xcn8k\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3105dc9e-c178-4799-a658-044d4d9b8312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868395 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-scripts\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868472 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data-custom\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868542 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3105dc9e-c178-4799-a658-044d4d9b8312-logs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868563 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868613 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.969961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-scripts\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data-custom\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3105dc9e-c178-4799-a658-044d4d9b8312-logs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970079 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970115 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcn8k\" (UniqueName: \"kubernetes.io/projected/3105dc9e-c178-4799-a658-044d4d9b8312-kube-api-access-xcn8k\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3105dc9e-c178-4799-a658-044d4d9b8312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.973851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3105dc9e-c178-4799-a658-044d4d9b8312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.976868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3105dc9e-c178-4799-a658-044d4d9b8312-logs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.978992 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data-custom\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.979521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.982548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-scripts\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.984623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.986413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.987255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.003689 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcn8k\" (UniqueName: 
\"kubernetes.io/projected/3105dc9e-c178-4799-a658-044d4d9b8312-kube-api-access-xcn8k\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.150764 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.414389 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" path="/var/lib/kubelet/pods/97106034-e262-47a4-ae89-2bf1e9aa354f/volumes" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.484275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.773892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.783497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32"} Jan 30 14:07:31 crc kubenswrapper[4793]: I0130 14:07:31.840386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3105dc9e-c178-4799-a658-044d4d9b8312","Type":"ContainerStarted","Data":"cd40f95368411b7b7624f6cefa1037a51682f45dcdf5aa9cdc5fd4b2cbe3b9b8"} Jan 30 14:07:31 crc kubenswrapper[4793]: I0130 14:07:31.840704 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3105dc9e-c178-4799-a658-044d4d9b8312","Type":"ContainerStarted","Data":"0ac28b1a3e02c47c2f66643e29bbde6de1d8f2d98e53eee6f58248806331ad3b"} Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.852023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4"} Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.853160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.854292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3105dc9e-c178-4799-a658-044d4d9b8312","Type":"ContainerStarted","Data":"145de7c0116031ea1a2a271f310eb429f2ca5d3d0cd2a37fed800d5cde00f3ce"} Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.854489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.899091 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.076168668 podStartE2EDuration="8.899074338s" podCreationTimestamp="2026-01-30 14:07:24 +0000 UTC" firstStartedPulling="2026-01-30 14:07:25.674222624 +0000 UTC m=+1456.375571125" lastFinishedPulling="2026-01-30 14:07:31.497128304 +0000 UTC m=+1462.198476795" observedRunningTime="2026-01-30 14:07:32.884295301 +0000 UTC m=+1463.585643802" watchObservedRunningTime="2026-01-30 14:07:32.899074338 +0000 UTC m=+1463.600422829" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.130142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.151832 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.15181619 podStartE2EDuration="4.15181619s" podCreationTimestamp="2026-01-30 14:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:32.940916622 +0000 UTC m=+1463.642265113" watchObservedRunningTime="2026-01-30 14:07:33.15181619 +0000 UTC m=+1463.853164681" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.190582 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.190808 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns" containerID="cri-o://43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" gracePeriod=10 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.404392 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.416986 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.617458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.668632 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.790821 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.791728 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.868511 4793 generic.go:334] "Generic (PLEG): container finished" podID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" exitCode=0 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.869701 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870133 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler" containerID="cri-o://7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870510 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe" containerID="cri-o://8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870524 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerDied","Data":"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630"} Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870669 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerDied","Data":"de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85"} Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870694 4793 scope.go:117] "RemoveContainer" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.878523 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"] Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.878740 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log" containerID="cri-o://f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.878818 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api" containerID="cri-o://782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915094 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915179 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc 
kubenswrapper[4793]: I0130 14:07:33.915494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915525 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.925261 4793 scope.go:117] "RemoveContainer" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.960787 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm" (OuterVolumeSpecName: "kube-api-access-6ptwm") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "kube-api-access-6ptwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.024851 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.074073 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.083295 4793 scope.go:117] "RemoveContainer" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" Jan 30 14:07:34 crc kubenswrapper[4793]: E0130 14:07:34.085562 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630\": container with ID starting with 43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630 not found: ID does not exist" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.085599 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630"} err="failed to get container status \"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630\": rpc error: code = NotFound desc = could not find container \"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630\": container with ID starting with 43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630 not found: ID does not exist" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.085619 4793 scope.go:117] "RemoveContainer" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" Jan 30 14:07:34 crc kubenswrapper[4793]: E0130 14:07:34.085971 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d\": container with ID starting with 8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d not found: ID does not exist" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.086023 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d"} err="failed to get container status \"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d\": rpc error: code = NotFound desc = could not find container \"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d\": container with ID starting with 8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d not found: ID does not exist" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.105149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.118721 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.126335 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.126368 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.126378 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.129187 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.134548 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config" (OuterVolumeSpecName: "config") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.242905 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.242978 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.272303 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.318504 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.408292 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" path="/var/lib/kubelet/pods/b318d131-c8b9-41a5-a500-f8a9405e0074/volumes" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.890458 4793 generic.go:334] "Generic (PLEG): container finished" podID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc" exitCode=143 Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.890505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerDied","Data":"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"} Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.305016 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.363210 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.363403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.363998 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364118 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364634 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.366201 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.372430 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52" (OuterVolumeSpecName: "kube-api-access-bvr52") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "kube-api-access-bvr52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.373088 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts" (OuterVolumeSpecName: "scripts") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.385366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.458790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469545 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469600 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469613 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469625 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.574184 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data" (OuterVolumeSpecName: "config-data") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.674558 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912395 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4" exitCode=0
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912441 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156" exitCode=0
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912451 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912465 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerDied","Data":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"}
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerDied","Data":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"}
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerDied","Data":"159c1470b0ba252efe02d67b50c8e7273c57baeaea595257f321b0b7be1d2fd8"}
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912528 4793 scope.go:117] "RemoveContainer" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.974809 4793 scope.go:117] "RemoveContainer" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.978838 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.001103 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.002730 4793 scope.go:117] "RemoveContainer" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.003131 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": container with ID starting with 8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4 not found: ID does not exist" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003176 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"} err="failed to get container status \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": rpc error: code = NotFound desc = could not find container \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": container with ID starting with 8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003218 4793 scope.go:117] "RemoveContainer" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.003538 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": container with ID starting with 7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156 not found: ID does not exist" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003570 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"} err="failed to get container status \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": rpc error: code = NotFound desc = could not find container \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": container with ID starting with 7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003591 4793 scope.go:117] "RemoveContainer" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.007142 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"} err="failed to get container status \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": rpc error: code = NotFound desc = could not find container \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": container with ID starting with 8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.007174 4793 scope.go:117] "RemoveContainer" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.007470 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"} err="failed to get container status \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": rpc error: code = NotFound desc = could not find container \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": container with ID starting with 7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025120 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025513 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025530 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025546 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="init"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025553 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="init"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025567 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025574 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025593 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025598 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025766 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025777 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025801 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.026714 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.032282 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.033153 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084640 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084730 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6dm\" (UniqueName: \"kubernetes.io/projected/83e26b73-5483-4b6c-88cd-5d794f14ef5a-kube-api-access-6p6dm\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084785 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83e26b73-5483-4b6c-88cd-5d794f14ef5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186095 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6dm\" (UniqueName: \"kubernetes.io/projected/83e26b73-5483-4b6c-88cd-5d794f14ef5a-kube-api-access-6p6dm\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186165 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186190 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83e26b73-5483-4b6c-88cd-5d794f14ef5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.187135 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83e26b73-5483-4b6c-88cd-5d794f14ef5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.191013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.191645 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.192365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.193749 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.211749 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6dm\" (UniqueName: \"kubernetes.io/projected/83e26b73-5483-4b6c-88cd-5d794f14ef5a-kube-api-access-6p6dm\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.363234 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.408110 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" path="/var/lib/kubelet/pods/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8/volumes"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.840380 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: W0130 14:07:36.852447 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e26b73_5483_4b6c_88cd_5d794f14ef5a.slice/crio-4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008 WatchSource:0}: Error finding container 4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008: Status 404 returned error can't find the container with id 4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.930175 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"83e26b73-5483-4b6c-88cd-5d794f14ef5a","Type":"ContainerStarted","Data":"4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008"}
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.941331 4793 generic.go:334] "Generic (PLEG): container finished" podID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerID="3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba" exitCode=0
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.941600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerDied","Data":"3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba"}
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.357000 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:60306->10.217.0.158:9311: read: connection reset by peer"
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.357574 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:60320->10.217.0.158:9311: read: connection reset by peer"
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.779949 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835530 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835651 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835728 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835843 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.836359 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs" (OuterVolumeSpecName: "logs") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.843836 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.846263 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94" (OuterVolumeSpecName: "kube-api-access-8zv94") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "kube-api-access-8zv94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.867271 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.890442 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data" (OuterVolumeSpecName: "config-data") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941731 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941767 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941779 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941795 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941810 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011474 4793 generic.go:334] "Generic (PLEG): container finished" podID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6" exitCode=0
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerDied","Data":"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"}
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerDied","Data":"f97b2202fc16d2a3c18bd1abd87cac5c90aa96890b8132c11e4c4e9fbac70a09"}
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011614 4793 scope.go:117] "RemoveContainer" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011755 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.026117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"83e26b73-5483-4b6c-88cd-5d794f14ef5a","Type":"ContainerStarted","Data":"9f6bf51b0d3ae3ad5c4b17a445b1872a23a3e99c9b18205de5d2846bc10811e6"}
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.061499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"]
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.067575 4793 scope.go:117] "RemoveContainer" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.068907 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"]
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132140 4793 scope.go:117] "RemoveContainer" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"
Jan 30 14:07:38 crc kubenswrapper[4793]: E0130 14:07:38.132536 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6\": container with ID starting with 782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6 not found: ID does not exist" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132567 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"} err="failed to get container status \"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6\": rpc error: code = NotFound desc = could not find container \"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6\": container with ID starting with 782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6 not found: ID does not exist"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132587 4793 scope.go:117] "RemoveContainer" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"
Jan 30 14:07:38 crc kubenswrapper[4793]: E0130 14:07:38.132892 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc\": container with ID starting with f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc not found: ID does not exist" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132917 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"} err="failed to get container status \"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc\": rpc error: code = NotFound desc = could not find container \"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc\": container with ID starting with f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc not found: ID does not exist"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.412428 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" path="/var/lib/kubelet/pods/a2288b37-d331-4c7e-b95d-13bb4987eb75/volumes"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.564461 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9k2k7"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.652318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"16a2a816-c28c-4d74-848a-2821a9d68d70\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") "
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.652405 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"16a2a816-c28c-4d74-848a-2821a9d68d70\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") "
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.652631 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"16a2a816-c28c-4d74-848a-2821a9d68d70\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") "
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.679702 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6" (OuterVolumeSpecName: "kube-api-access-mb7n6") pod "16a2a816-c28c-4d74-848a-2821a9d68d70" (UID: "16a2a816-c28c-4d74-848a-2821a9d68d70"). InnerVolumeSpecName "kube-api-access-mb7n6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.684193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16a2a816-c28c-4d74-848a-2821a9d68d70" (UID: "16a2a816-c28c-4d74-848a-2821a9d68d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.703173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config" (OuterVolumeSpecName: "config") pod "16a2a816-c28c-4d74-848a-2821a9d68d70" (UID: "16a2a816-c28c-4d74-848a-2821a9d68d70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.754521 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.754553 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.754566 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.035750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerDied","Data":"fc613fe2ad6c1be056bd77d206032a6320f75af4b1f9de343208058c0b3d8709"}
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.035794 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc613fe2ad6c1be056bd77d206032a6320f75af4b1f9de343208058c0b3d8709"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.035857 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9k2k7"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.045651 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"83e26b73-5483-4b6c-88cd-5d794f14ef5a","Type":"ContainerStarted","Data":"b933b510d8c79ac267ebb1c54b743d5617a150a4c0c6aa1255f3ea6f5c051ace"}
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.091233 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.091215941 podStartE2EDuration="4.091215941s" podCreationTimestamp="2026-01-30 14:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:39.090433101 +0000 UTC m=+1469.791781612" watchObservedRunningTime="2026-01-30 14:07:39.091215941 +0000 UTC m=+1469.792564432"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.146918 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"]
Jan 30 14:07:39 crc kubenswrapper[4793]: E0130 14:07:39.147370 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147388 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log"
Jan 30 14:07:39 crc kubenswrapper[4793]: E0130 14:07:39.147401 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147407 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api"
Jan 30 14:07:39 crc kubenswrapper[4793]: E0130 14:07:39.147436 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerName="neutron-db-sync"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147443 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerName="neutron-db-sync"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147615 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147635 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147653 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerName="neutron-db-sync"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.148596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.207138 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"]
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.251839 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.258133 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.274731 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.274989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.275184 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.275336 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-brjvn"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.285909 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286031 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286156 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.318126 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388848 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389082 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389107 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389221 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389971 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390033 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390589 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.412862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.483951 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491517 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.496918 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.507533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.507611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.510606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.513198 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.581721 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.845178 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:07:40 crc kubenswrapper[4793]: I0130 14:07:40.088323 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"]
Jan 30 14:07:40 crc kubenswrapper[4793]: W0130 14:07:40.094838 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbe3cabf_7884_41df_adac_ad1bf7e76bf9.slice/crio-067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7 WatchSource:0}: Error finding container 067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7: Status 404 returned error can't find the container with id 067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7
Jan 30 14:07:40 crc kubenswrapper[4793]: I0130 14:07:40.330925 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.065303 4793 generic.go:334] "Generic (PLEG): container finished" podID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" exitCode=0
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.065398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerDied","Data":"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.065689 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerStarted","Data":"067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067669 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerStarted","Data":"aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerStarted","Data":"9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerStarted","Data":"0c2d21afdba7970d61ae9dcca3d44a8ee8d119daf524bd616f6bfe333ace90f3"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067852 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.156831 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75bd8998b8-27gd6" podStartSLOduration=2.156815069 podStartE2EDuration="2.156815069s" podCreationTimestamp="2026-01-30 14:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:41.109872802 +0000 UTC m=+1471.811221323" watchObservedRunningTime="2026-01-30 14:07:41.156815069 +0000 UTC m=+1471.858163560"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.369470 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.653454 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-668ffd44cc-lhns4"]
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.659420 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.663492 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.663644 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.678740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-668ffd44cc-lhns4"]
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsns\" (UniqueName: \"kubernetes.io/projected/d9f34138-4dce-415b-ad20-cf0ba588f012-kube-api-access-cbsns\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-internal-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-ovndb-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760525 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-combined-ca-bundle\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-public-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-httpd-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760640 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862074 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsns\" (UniqueName: \"kubernetes.io/projected/d9f34138-4dce-415b-ad20-cf0ba588f012-kube-api-access-cbsns\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862160 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-internal-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862181 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-ovndb-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-combined-ca-bundle\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-public-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-httpd-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.869705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-ovndb-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.869764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-combined-ca-bundle\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.869931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-internal-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.872817 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-httpd-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.881487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.894729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-public-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.901809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsns\" (UniqueName: \"kubernetes.io/projected/d9f34138-4dce-415b-ad20-cf0ba588f012-kube-api-access-cbsns\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.019561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.082967 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerStarted","Data":"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9"}
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.083616 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.110005 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" podStartSLOduration=3.109983014 podStartE2EDuration="3.109983014s" podCreationTimestamp="2026-01-30 14:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:42.107359461 +0000 UTC m=+1472.808707952" watchObservedRunningTime="2026-01-30 14:07:42.109983014 +0000 UTC m=+1472.811331505"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.679884 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-668ffd44cc-lhns4"]
Jan 30 14:07:43 crc kubenswrapper[4793]: I0130 14:07:43.118872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-668ffd44cc-lhns4" event={"ID":"d9f34138-4dce-415b-ad20-cf0ba588f012","Type":"ContainerStarted","Data":"def0e09d8215d1128f3b8d9e2dff0f499eba944c2fe283c8b19da86a92134de3"}
Jan 30 14:07:43 crc kubenswrapper[4793]: I0130 14:07:43.119599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-668ffd44cc-lhns4"
event={"ID":"d9f34138-4dce-415b-ad20-cf0ba588f012","Type":"ContainerStarted","Data":"b806303ea738519210a64a9d9989bb78f1b45eb8b172fb4de474e0bcd077ca0e"} Jan 30 14:07:43 crc kubenswrapper[4793]: I0130 14:07:43.675412 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.121430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-668ffd44cc-lhns4" event={"ID":"d9f34138-4dce-415b-ad20-cf0ba588f012","Type":"ContainerStarted","Data":"f9985449191b4ffcd31221b22a2f985848c73964cf8516d53b7c455eec2eaab5"} Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.121793 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-668ffd44cc-lhns4" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.145669 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-668ffd44cc-lhns4" podStartSLOduration=3.145650398 podStartE2EDuration="3.145650398s" podCreationTimestamp="2026-01-30 14:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:44.14080234 +0000 UTC m=+1474.842150831" watchObservedRunningTime="2026-01-30 14:07:44.145650398 +0000 UTC m=+1474.846998889" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.163297 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="3105dc9e-c178-4799-a658-044d4d9b8312" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.377028 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.448672 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.450113 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.452724 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.453546 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.454158 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-68q9f" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.467003 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542592 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6hs\" (UniqueName: \"kubernetes.io/projected/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-kube-api-access-6f6hs\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542658 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6f6hs\" (UniqueName: \"kubernetes.io/projected/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-kube-api-access-6f6hs\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.645592 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.650197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.652398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.677666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f6hs\" (UniqueName: \"kubernetes.io/projected/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-kube-api-access-6f6hs\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.777628 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.675982 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.810011 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.820042 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.843762 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 14:07:47 crc kubenswrapper[4793]: I0130 14:07:47.151419 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7","Type":"ContainerStarted","Data":"c117e8966984d1742423ebc29fafde41dbe7cdc75011c22f88b7b683046118f8"} Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.169250 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="3105dc9e-c178-4799-a658-044d4d9b8312" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.486380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.538843 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.539088 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" containerID="cri-o://4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64" gracePeriod=10 Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.838066 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.189801 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerID="4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64" exitCode=0 Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.189872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerDied","Data":"4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64"} Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.190146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerDied","Data":"dea9c67f4ab17b561d40848ccf607759778f130142a4dfee52cb6203cfd164a1"} Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.190159 4793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="dea9c67f4ab17b561d40848ccf607759778f130142a4dfee52cb6203cfd164a1" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.236821 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346771 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346821 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346959 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.347039 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.347118 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.375302 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b" (OuterVolumeSpecName: "kube-api-access-hrw8b") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "kube-api-access-hrw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.440239 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.474405 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.474701 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.513267 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.534600 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config" (OuterVolumeSpecName: "config") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.541996 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.566623 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576189 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576227 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576238 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576247 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.207777 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266" exitCode=1 Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.208073 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.209288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"} Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.209328 4793 scope.go:117] "RemoveContainer" containerID="dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8" Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.344680 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.351964 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:52 crc kubenswrapper[4793]: I0130 14:07:52.217570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"} Jan 30 14:07:52 crc kubenswrapper[4793]: I0130 14:07:52.409508 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" path="/var/lib/kubelet/pods/2e12fa14-c592-4e14-8e7a-c02ee84cec72/volumes" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.234913 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7767cf976c-8m6hn"] Jan 30 14:07:54 crc kubenswrapper[4793]: E0130 14:07:54.239478 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="init" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.239496 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="init" Jan 30 14:07:54 crc kubenswrapper[4793]: E0130 14:07:54.239525 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.239531 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.239692 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.240753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.244776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.244974 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.245125 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.256377 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7767cf976c-8m6hn"] Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351311 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwgt\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-kube-api-access-cbwgt\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351354 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-config-data\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-internal-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-combined-ca-bundle\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-public-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-run-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-log-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351946 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-etc-swift\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.453819 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-config-data\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.453914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-internal-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-combined-ca-bundle\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-public-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-run-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-log-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-etc-swift\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454156 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwgt\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-kube-api-access-cbwgt\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.455016 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-run-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.459160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-log-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.462229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-etc-swift\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.473831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-config-data\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.474409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-internal-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.474913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-public-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.479581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-combined-ca-bundle\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.480729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwgt\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-kube-api-access-cbwgt\") pod \"swift-proxy-7767cf976c-8m6hn\" 
(UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.560468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.562501 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.566033 4793 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3ed51218-5677-4c7a-aeb6-1ec6c215178a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod3ed51218-5677-4c7a-aeb6-1ec6c215178a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3ed51218_5677_4c7a_aeb6_1ec6c215178a.slice" Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.764253 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.765037 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" containerID="cri-o://4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8" gracePeriod=30 Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.765157 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" containerID="cri-o://6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4" gracePeriod=30 Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.765462 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" containerID="cri-o://1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e" gracePeriod=30 Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.764730 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" containerID="cri-o://0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672" gracePeriod=30 Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268070 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4" exitCode=0 Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4"} Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268104 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8" exitCode=2 Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268127 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8"} Jan 30 14:07:57 crc 
kubenswrapper[4793]: I0130 14:07:57.280786 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672" exitCode=0 Jan 30 14:07:57 crc kubenswrapper[4793]: I0130 14:07:57.280956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672"} Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.300711 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e" exitCode=0 Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.300790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e"} Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.608740 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.608802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.838280 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.838824 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.839666 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32"} pod="openstack/horizon-5b9fc5f8f6-nj7xv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.839709 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" containerID="cri-o://640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32" gracePeriod=30 Jan 30 14:08:05 crc kubenswrapper[4793]: I0130 14:08:05.690026 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:05 crc kubenswrapper[4793]: I0130 14:08:05.691019 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" containerID="cri-o://031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170" gracePeriod=30 Jan 30 14:08:05 crc kubenswrapper[4793]: I0130 14:08:05.691019 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" 
containerID="cri-o://dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8" gracePeriod=30 Jan 30 14:08:05 crc kubenswrapper[4793]: E0130 14:08:05.993413 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 30 14:08:05 crc kubenswrapper[4793]: E0130 14:08:05.993819 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cdh694h594hb8h5f7h79h544h6h5b9h64ch656h9ch55h58dh585h5dh565h75h5c6h65hc9hffh7h664h5c4h5bch678h95hb7hd6h5c6h75q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f6hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:08:05 crc kubenswrapper[4793]: E0130 14:08:05.995179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.386941 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"d21421b35db87347d4a7181c28d855890a9a721d97cf5be20f5f36330a91c466"} Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.387327 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21421b35db87347d4a7181c28d855890a9a721d97cf5be20f5f36330a91c466" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.390746 4793 generic.go:334] "Generic (PLEG): container finished" podID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerID="dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8" exitCode=143 Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.392169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerDied","Data":"dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8"} Jan 30 14:08:06 crc kubenswrapper[4793]: E0130 14:08:06.395231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.454609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.578886 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579159 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579824 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.580131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod 
\"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.580205 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.580803 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.581711 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.587867 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq" (OuterVolumeSpecName: "kube-api-access-5hlzq") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "kube-api-access-5hlzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.597354 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts" (OuterVolumeSpecName: "scripts") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.667713 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.681836 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682225 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682353 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682416 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682478 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.688686 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7767cf976c-8m6hn"] Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.707246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.711256 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data" (OuterVolumeSpecName: "config-data") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.784328 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.784482 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.363935 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364371 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364390 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364408 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364417 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364440 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364447 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364472 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364481 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364713 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364729 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364755 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364768 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.365483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.377230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.418496 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7767cf976c-8m6hn" event={"ID":"de3851c3-345e-41a1-ad9e-ee3f4e357d85","Type":"ContainerStarted","Data":"2530debb883c8718264ad859e9a7e4a811aa1f43db904ffcb018cbaf3181cc82"} Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425206 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7767cf976c-8m6hn" event={"ID":"de3851c3-345e-41a1-ad9e-ee3f4e357d85","Type":"ContainerStarted","Data":"d3cc4543b61e25259ad21b1238264a2493c067ecc414c9ee20e5a711e20fe3f4"} Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425223 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7767cf976c-8m6hn" event={"ID":"de3851c3-345e-41a1-ad9e-ee3f4e357d85","Type":"ContainerStarted","Data":"8a946a4833cfb767bcfbbb40705973681bed85995635fe64826cd54d06ee681d"} Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425244 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425259 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.483195 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7767cf976c-8m6hn" podStartSLOduration=13.483176667 podStartE2EDuration="13.483176667s" podCreationTimestamp="2026-01-30 14:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:07.464283479 +0000 UTC m=+1498.165631970" watchObservedRunningTime="2026-01-30 14:08:07.483176667 +0000 UTC m=+1498.184525158" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.492265 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.495615 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.495969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.506184 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.523337 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc 
kubenswrapper[4793]: I0130 14:08:07.525504 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.528575 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.532946 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.533159 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.601766 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602790 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602891 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603330 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod 
\"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603412 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.604883 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.610178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.612299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.627354 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.660881 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.682994 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.701294 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.702421 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708102 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708164 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708217 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708280 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708295 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708376 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.709413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.714741 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.714993 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.717705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.725111 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.732344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.735518 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.765221 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.776193 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.780119 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.784346 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810555 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810684 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.811359 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.836344 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.855402 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.860777 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921270 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921338 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.922318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.963478 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.974013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.992435 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.993632 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.016368 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.024326 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.038648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.040015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.028684 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.077206 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.144000 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.144125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.159470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.160031 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.245625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.245764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.246825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.283886 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.385194 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.439624 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.442803 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.479520 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" path="/var/lib/kubelet/pods/45c782cb-cc45-4785-bdff-d6d9e30389e8/volumes" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.486225 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.505734 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.555873 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.555942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.660754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.661032 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.662122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.683824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.742515 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.805303 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.929654 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.099869 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.117932 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:08:09 crc kubenswrapper[4793]: W0130 14:08:09.165670 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22f1b95b_bf17_486c_a4b0_0a2aa96cf847.slice/crio-2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b WatchSource:0}: Error finding container 2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b: Status 404 returned error can't find the container with id 2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.554537 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"4b73fadc6c8c2f194f24f28709e01df912df317bb62ccab5847b10d6fe6ae833"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.569108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerStarted","Data":"133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.569152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerStarted","Data":"a273f3836de526e82dca6ed6f42af688cb27feae454dd2f42ce8b2e0b73c5dfa"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.581219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6ttpt" event={"ID":"22f1b95b-bf17-486c-a4b0-0a2aa96cf847","Type":"ContainerStarted","Data":"2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.596332 4793 generic.go:334] "Generic (PLEG): container finished" podID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerID="031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170" exitCode=0 Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.596404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerDied","Data":"031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.598570 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.606152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n6kxs" event={"ID":"6a263a6b-c717-4bb9-ae46-edfd534e347f","Type":"ContainerStarted","Data":"204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.622115 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.634324 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.640821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-k8j4t" podStartSLOduration=2.640798034 podStartE2EDuration="2.640798034s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:09.593822466 +0000 UTC m=+1500.295170957" watchObservedRunningTime="2026-01-30 14:08:09.640798034 +0000 UTC m=+1500.342146525" Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.641791 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75bd8998b8-27gd6" Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.710955 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.108020 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.274871 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.274924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275011 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275181 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275235 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") 
pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275312 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275905 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs" (OuterVolumeSpecName: "logs") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.277785 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.281583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.317631 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.343779 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts" (OuterVolumeSpecName: "scripts") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.343783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv" (OuterVolumeSpecName: "kube-api-access-mhczv") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "kube-api-access-mhczv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380476 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380505 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380516 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380524 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.526246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.529495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data" (OuterVolumeSpecName: "config-data") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.560938 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.563588 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583791 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583825 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583837 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583850 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.646061 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerStarted","Data":"2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.646113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerStarted","Data":"90130b1320508cde1497dbb65370a3963dd62f09c528149c60ea7d9a6a45074b"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.667024 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerStarted","Data":"28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.667114 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerStarted","Data":"8c4348c8357b277e9a66ed81f3e268940905c66caa51a7f6288db916158e5349"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.678311 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" podStartSLOduration=3.678292542 podStartE2EDuration="3.678292542s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:10.67326825 +0000 UTC m=+1501.374616751" watchObservedRunningTime="2026-01-30 14:08:10.678292542 +0000 UTC m=+1501.379641023" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.688867 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerID="133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3" exitCode=0 Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.688996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" 
event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerDied","Data":"133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.700519 4793 generic.go:334] "Generic (PLEG): container finished" podID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerID="de572dff5d2f58a1803be7f7064305ab032e127eb6c4e1ab6668a1723190ad57" exitCode=0 Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.700626 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6ttpt" event={"ID":"22f1b95b-bf17-486c-a4b0-0a2aa96cf847","Type":"ContainerDied","Data":"de572dff5d2f58a1803be7f7064305ab032e127eb6c4e1ab6668a1723190ad57"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.709028 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerStarted","Data":"3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.709088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerStarted","Data":"451da4a93e99f3be95f70ce67765d9ec8492af1c653717ecc19c70a1b959d011"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.712480 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e189-account-create-update-hp64h" podStartSLOduration=2.712461729 podStartE2EDuration="2.712461729s" podCreationTimestamp="2026-01-30 14:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:10.701118515 +0000 UTC m=+1501.402467006" watchObservedRunningTime="2026-01-30 14:08:10.712461729 +0000 UTC m=+1501.413810220" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.727546 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerDied","Data":"70a9907e2896545270e49ea508b4c54cd74205507f20d607e118c4c1d4eb4471"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.727632 4793 scope.go:117] "RemoveContainer" containerID="031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.727824 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.744246 4793 generic.go:334] "Generic (PLEG): container finished" podID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerID="8dcf35a2124b97e38202260bc4331118f9488517abad0d7a3392779f07bd54b6" exitCode=0 Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.744310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n6kxs" event={"ID":"6a263a6b-c717-4bb9-ae46-edfd534e347f","Type":"ContainerDied","Data":"8dcf35a2124b97e38202260bc4331118f9488517abad0d7a3392779f07bd54b6"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.834683 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-5737-account-create-update-7wpgl" podStartSLOduration=3.834665729 podStartE2EDuration="3.834665729s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:10.805114613 +0000 UTC m=+1501.506463124" watchObservedRunningTime="2026-01-30 14:08:10.834665729 +0000 UTC m=+1501.536014210" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.035086 4793 scope.go:117] "RemoveContainer" containerID="dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.151596 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.209356 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220117 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: E0130 14:08:11.220712 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220739 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" Jan 30 14:08:11 crc kubenswrapper[4793]: E0130 14:08:11.220766 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220774 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220985 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.221023 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.222525 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.231230 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.231481 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.236309 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325224 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325311 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325340 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47p5\" (UniqueName: \"kubernetes.io/projected/f96d1ae8-18a5-4651-b460-21e9ddb50684-kube-api-access-r47p5\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325536 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-logs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-logs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429632 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429689 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47p5\" (UniqueName: \"kubernetes.io/projected/f96d1ae8-18a5-4651-b460-21e9ddb50684-kube-api-access-r47p5\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.430037 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.432035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-logs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.432532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.448455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.449366 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.450269 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.460606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.472805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47p5\" (UniqueName: \"kubernetes.io/projected/f96d1ae8-18a5-4651-b460-21e9ddb50684-kube-api-access-r47p5\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.499495 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.570758 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.768713 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerID="640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32" exitCode=0
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.768774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerDied","Data":"640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32"}
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.769009 4793 scope.go:117] "RemoveContainer" containerID="f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8"
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.780889 4793 generic.go:334] "Generic (PLEG): container finished" podID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerID="2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a" exitCode=0
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.781160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerDied","Data":"2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a"}
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.785974 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658"}
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.788165 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerID="28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9" exitCode=0
Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.788224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerDied","Data":"28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9"}
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.050855 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.120221 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.120496 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75bd8998b8-27gd6" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api" containerID="cri-o://9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3" gracePeriod=30
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.120913 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75bd8998b8-27gd6" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd" containerID="cri-o://aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8" gracePeriod=30
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.448384 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" path="/var/lib/kubelet/pods/5559c03d-3177-4b79-9d5b-4272abb3332c/volumes"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.734489 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.749712 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.773650 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.831684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6ttpt" event={"ID":"22f1b95b-bf17-486c-a4b0-0a2aa96cf847","Type":"ContainerDied","Data":"2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b"}
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.831751 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.831851 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.852882 4793 generic.go:334] "Generic (PLEG): container finished" podID="20523849-0caa-42b2-9b52-d5661f90ea95" containerID="3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c" exitCode=0
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.852975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerDied","Data":"3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c"}
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") "
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"ed8e6fd4-c884-4a5d-8189-3929beafa311\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") "
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"ed8e6fd4-c884-4a5d-8189-3929beafa311\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") "
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"6a263a6b-c717-4bb9-ae46-edfd534e347f\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") "
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") "
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"6a263a6b-c717-4bb9-ae46-edfd534e347f\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") "
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.880360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22f1b95b-bf17-486c-a4b0-0a2aa96cf847" (UID: "22f1b95b-bf17-486c-a4b0-0a2aa96cf847"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.880780 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a263a6b-c717-4bb9-ae46-edfd534e347f" (UID: "6a263a6b-c717-4bb9-ae46-edfd534e347f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.881319 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed8e6fd4-c884-4a5d-8189-3929beafa311" (UID: "ed8e6fd4-c884-4a5d-8189-3929beafa311"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.887187 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4" (OuterVolumeSpecName: "kube-api-access-vktr4") pod "6a263a6b-c717-4bb9-ae46-edfd534e347f" (UID: "6a263a6b-c717-4bb9-ae46-edfd534e347f"). InnerVolumeSpecName "kube-api-access-vktr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.890317 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p" (OuterVolumeSpecName: "kube-api-access-l2x8p") pod "ed8e6fd4-c884-4a5d-8189-3929beafa311" (UID: "ed8e6fd4-c884-4a5d-8189-3929beafa311"). InnerVolumeSpecName "kube-api-access-l2x8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.892885 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg" (OuterVolumeSpecName: "kube-api-access-lm9cg") pod "22f1b95b-bf17-486c-a4b0-0a2aa96cf847" (UID: "22f1b95b-bf17-486c-a4b0-0a2aa96cf847"). InnerVolumeSpecName "kube-api-access-lm9cg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.892978 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n6kxs" event={"ID":"6a263a6b-c717-4bb9-ae46-edfd534e347f","Type":"ContainerDied","Data":"204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1"}
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.893015 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.893097 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs"
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.938348 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.944585 4793 generic.go:334] "Generic (PLEG): container finished" podID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerID="aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8" exitCode=0
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.944668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerDied","Data":"aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8"}
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.966306 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"d2335cce21b11d1ab56e3ad35e0c55bce3cf69e2db057d909aa07232df9135ae"}
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.986603 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987092 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987187 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987282 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987380 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987459 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.992112 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t"
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:12.993416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerDied","Data":"a273f3836de526e82dca6ed6f42af688cb27feae454dd2f42ce8b2e0b73c5dfa"} Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.018927 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a273f3836de526e82dca6ed6f42af688cb27feae454dd2f42ce8b2e0b73c5dfa" Jan 30 14:08:13 crc kubenswrapper[4793]: E0130 14:08:13.108246 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode26816b7_89ad_4885_b481_3ae7a8ab90c4.slice/crio-aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a263a6b_c717_4bb9_ae46_edfd534e347f.slice/crio-204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1\": RecentStats: unable to find data in memory cache]" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.580629 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.728447 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"aec60191-c8b7-4d7a-a69f-765a9652878b\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.728802 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"aec60191-c8b7-4d7a-a69f-765a9652878b\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.730128 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aec60191-c8b7-4d7a-a69f-765a9652878b" (UID: "aec60191-c8b7-4d7a-a69f-765a9652878b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.741324 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg" (OuterVolumeSpecName: "kube-api-access-zf9xg") pod "aec60191-c8b7-4d7a-a69f-765a9652878b" (UID: "aec60191-c8b7-4d7a-a69f-765a9652878b"). InnerVolumeSpecName "kube-api-access-zf9xg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.833335 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.833358 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.848266 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.934542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.934624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.935066 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" (UID: "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.938574 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs" (OuterVolumeSpecName: "kube-api-access-fstbs") pod "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" (UID: "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167"). InnerVolumeSpecName "kube-api-access-fstbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.941456 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.941490 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.025811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.028469 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.028485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerDied","Data":"8c4348c8357b277e9a66ed81f3e268940905c66caa51a7f6288db916158e5349"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.028530 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4348c8357b277e9a66ed81f3e268940905c66caa51a7f6288db916158e5349" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.038255 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f96d1ae8-18a5-4651-b460-21e9ddb50684","Type":"ContainerStarted","Data":"01ddeb32f879e43a83e42f0d24ceaef2dc5cfaaf6a7650ad4d71889356b2adab"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.045995 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.046171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerDied","Data":"90130b1320508cde1497dbb65370a3963dd62f09c528149c60ea7d9a6a45074b"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.046242 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90130b1320508cde1497dbb65370a3963dd62f09c528149c60ea7d9a6a45074b" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.511954 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.592493 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.592974 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.661949 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"20523849-0caa-42b2-9b52-d5661f90ea95\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.662006 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"20523849-0caa-42b2-9b52-d5661f90ea95\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.663452 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20523849-0caa-42b2-9b52-d5661f90ea95" (UID: "20523849-0caa-42b2-9b52-d5661f90ea95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.670427 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8" (OuterVolumeSpecName: "kube-api-access-nfvh8") pod "20523849-0caa-42b2-9b52-d5661f90ea95" (UID: "20523849-0caa-42b2-9b52-d5661f90ea95"). InnerVolumeSpecName "kube-api-access-nfvh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.768248 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.768287 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.069532 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.069872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerDied","Data":"451da4a93e99f3be95f70ce67765d9ec8492af1c653717ecc19c70a1b959d011"} Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.069906 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451da4a93e99f3be95f70ce67765d9ec8492af1c653717ecc19c70a1b959d011" Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.081902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e"} Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.092585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f96d1ae8-18a5-4651-b460-21e9ddb50684","Type":"ContainerStarted","Data":"c59f9359bb100a7aec824b49a32eebb8648ff9a075e46ec6df4a5884b0447749"} Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.101867 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f96d1ae8-18a5-4651-b460-21e9ddb50684","Type":"ContainerStarted","Data":"d031e9f6d658416bd44e51043a5059246e656d8e514d5c5e93d5efdadd7f1105"} Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.125671 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.125648365 podStartE2EDuration="5.125648365s" podCreationTimestamp="2026-01-30 14:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:16.121108206 +0000 UTC m=+1506.822456707" watchObservedRunningTime="2026-01-30 14:08:16.125648365 +0000 UTC m=+1506.826996856" Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.747075 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.747591 4793 
Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.748007 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd" containerID="cri-o://7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951" gracePeriod=30
Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.110268 4793 generic.go:334] "Generic (PLEG): container finished" podID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerID="d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c" exitCode=143
Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.110356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerDied","Data":"d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c"}
Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.113226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe"}
Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.113398 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.152206 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.581936098 podStartE2EDuration="10.152189828s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="2026-01-30 14:08:08.95110451 +0000 UTC m=+1499.652453001" lastFinishedPulling="2026-01-30 14:08:16.52135824 +0000 UTC m=+1507.222706731" observedRunningTime="2026-01-30 14:08:17.146908451 +0000 UTC m=+1507.848256942" watchObservedRunningTime="2026-01-30 14:08:17.152189828 +0000 UTC m=+1507.853538319"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"]
Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.292914 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292928 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.292945 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292954 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.292965 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292972 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.293002 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293010 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.293023 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293030 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.293041 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293062 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293249 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293273 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293287 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293303 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293312 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" containerName="mariadb-account-create-update"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293328 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerName="mariadb-database-create"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.294137 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.297149 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.297548 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.297707 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rgtrf"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.309305 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"]
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.435833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.436121 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.436345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.436437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.576259 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.580582 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.581123 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.582591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.633372 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj"
Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.238855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"]
Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.608855 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused"
Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.831875 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.831973 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179371 4793 generic.go:334] "Generic (PLEG): container finished" podID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerID="9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3" exitCode=0
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerDied","Data":"9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3"}
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerDied","Data":"0c2d21afdba7970d61ae9dcca3d44a8ee8d119daf524bd616f6bfe333ace90f3"}
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179465 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2d21afdba7970d61ae9dcca3d44a8ee8d119daf524bd616f6bfe333ace90f3"
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.197975 4793 generic.go:334] "Generic (PLEG): container finished" podID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerID="7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951" exitCode=0
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.198037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerDied","Data":"7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951"}
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.212068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerStarted","Data":"10458f2044a1485dd49f34389e009c76947a11228dc091b7963498c198351281"}
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.242801 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.417810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") "
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.417868 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") "
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.417991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") "
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.418014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") "
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.418030 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") "
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.428591 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.445235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7" (OuterVolumeSpecName: "kube-api-access-vc2r7") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "kube-api-access-vc2r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.523238 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.523486 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.579505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config" (OuterVolumeSpecName: "config") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.590825 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.611340 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.624790 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.624826 4793 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.624836 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.024013 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137556 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137605 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137628 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137656 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137674 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137697 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137752 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.138105 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs" (OuterVolumeSpecName: "logs") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.138395 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.138468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.151577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts" (OuterVolumeSpecName: "scripts") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.152901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.153835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd" (OuterVolumeSpecName: "kube-api-access-44tdd") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "kube-api-access-44tdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.232450 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240708 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240732 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240743 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240752 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240760 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.271917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.272663 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.272757 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.272771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerDied","Data":"2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380"}
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.274616 4793 scope.go:117] "RemoveContainer" containerID="7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.295172 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.298693 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data" (OuterVolumeSpecName: "config-data") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.345651 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.345686 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.345699 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.348358 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.359671 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.367247 4793 scope.go:117] "RemoveContainer" containerID="d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.573083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.573265 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.638203 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.639526 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.654725 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.661749 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662283 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662364 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662552 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662616 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log"
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662681 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api"
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662804 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662857 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.663099 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.663193 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.663271 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.668215 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.669538 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.669618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.680195 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.680502 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.699686 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756344 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzjn\" (UniqueName: \"kubernetes.io/projected/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-kube-api-access-tzzjn\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-logs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756594 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756646 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756680 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.858792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859670 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859872 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzzjn\" (UniqueName: \"kubernetes.io/projected/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-kube-api-access-tzzjn\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860109 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-logs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-logs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.871947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.876762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.877299 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.877884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.909451 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzzjn\" (UniqueName: \"kubernetes.io/projected/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-kube-api-access-tzzjn\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.916686 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.995079 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.302632 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.302822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.412204 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" path="/var/lib/kubelet/pods/afd812b0-55db-4cff-b0cd-4b18afe5a4be/volumes"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.412942 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" path="/var/lib/kubelet/pods/e26816b7-89ad-4885-b481-3ae7a8ab90c4/volumes"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.663349 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:22 crc kubenswrapper[4793]: W0130 14:08:22.692015 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae7d1df8_4b0f_46f7_85f4_e24fd65a919d.slice/crio-ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416 WatchSource:0}: Error finding container ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416: Status 404 returned error can't find the container with id ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416
Jan 30 14:08:23 crc kubenswrapper[4793]: I0130 14:08:23.321491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7","Type":"ContainerStarted","Data":"96b175898b6a8155cc9b6df77597096c4715b37a8b44b1616f769e51e1320186"}
Jan 30 14:08:23 crc kubenswrapper[4793]: I0130 14:08:23.329551 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d","Type":"ContainerStarted","Data":"ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416"}
Jan 30 14:08:23 crc kubenswrapper[4793]: I0130 14:08:23.352906 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.459793035 podStartE2EDuration="38.352885697s" podCreationTimestamp="2026-01-30 14:07:45 +0000 UTC" firstStartedPulling="2026-01-30 14:07:46.69780082 +0000 UTC m=+1477.399149311" lastFinishedPulling="2026-01-30 14:08:22.590893482 +0000 UTC m=+1513.292241973" observedRunningTime="2026-01-30 14:08:23.339475793 +0000 UTC m=+1514.040824284" watchObservedRunningTime="2026-01-30 14:08:23.352885697 +0000 UTC m=+1514.054234188"
Jan 30 14:08:24 crc kubenswrapper[4793]: I0130 14:08:24.341618 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d","Type":"ContainerStarted","Data":"503f96a09eca509027938ff0c9d0ac2065d3fbcd11bc7f66eb0d6e55bd0de7ba"}
Jan 30 14:08:25 crc kubenswrapper[4793]: I0130 14:08:25.370777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d","Type":"ContainerStarted","Data":"15d75e3810a861b7bbc46e5562fd9f0ed5fc04b9db54a0f610d1e8824d83ad3f"}
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.415866 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.416936 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.429040 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.444095 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.444075945 podStartE2EDuration="6.444075945s" podCreationTimestamp="2026-01-30 14:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:25.398926712 +0000 UTC m=+1516.100275203" watchObservedRunningTime="2026-01-30 14:08:27.444075945 +0000 UTC m=+1518.145424436"
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.419450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"}
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.419771 4793 scope.go:117] "RemoveContainer" containerID="1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.420272 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" exitCode=1
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.420502 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"
\"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5)\"" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.609236 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.609299 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.610158 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" Jan 30 14:08:29 crc kubenswrapper[4793]: E0130 14:08:29.610403 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5)\"" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.839413 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 14:08:31 crc kubenswrapper[4793]: I0130 14:08:31.995502 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:08:31 crc kubenswrapper[4793]: I0130 14:08:31.995842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.134275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.134748 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.461268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.461468 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:08:38 crc kubenswrapper[4793]: I0130 14:08:38.168630 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.495905 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.496301 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.496301 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xntcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-w8lcj_openstack(4ba071cd-0f26-432d-809e-709cad1a1e64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.498183 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64"
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.665423 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64"
Jan 30 14:08:39 crc kubenswrapper[4793]: I0130 14:08:39.831978 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.018842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
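
[Editor's note] At this point horizon-6b66cd9fcf-c94kp is in CrashLoopBackOff ("back-off 20s") and nova-cell0-conductor-db-sync-w8lcj has moved from ErrImagePull to ImagePullBackOff; both are instances of the kubelet's doubling back-off between retries. A sketch of the pattern follows, assuming the commonly cited kubelet defaults of a 10s initial delay, a 2x factor, and a 5m cap; the log itself only shows the 20s step.

// backoff.go: the exponential back-off pattern behind CrashLoopBackOff.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // assumed defaults
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2 // double after each failed restart
		if delay > maxDelay {
			delay = maxDelay // cap; successful runs reset the sequence
		}
	}
}

Under these assumptions the second failed restart lands on the 20s step that appears in the messages above.
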
manual run" probe="Readiness" Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.026131 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.300841 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301607 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core" containerID="cri-o://952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e" gracePeriod=30 Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301626 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent" containerID="cri-o://f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd" gracePeriod=30 Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301626 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd" containerID="cri-o://78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe" gracePeriod=30 Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301939 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent" containerID="cri-o://770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658" gracePeriod=30 Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.547328 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e" exitCode=2 Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.547369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e"} Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.398929 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" Jan 30 14:08:41 crc kubenswrapper[4793]: E0130 14:08:41.400478 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5)\"" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558316 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe" exitCode=0 Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558344 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658" exitCode=0 Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe"} Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658"} Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.593518 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd" exitCode=0 Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.593678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd"} Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.727944 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.791821 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.792160 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.792264 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797702 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797803 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797858 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.802611 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.808189 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.808901 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.808970 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.834696 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts" (OuterVolumeSpecName: "scripts") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.843379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7" (OuterVolumeSpecName: "kube-api-access-bj9v7") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "kube-api-access-bj9v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.914839 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.914881 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.989151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.004222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.019420 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.019735 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.101010 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data" (OuterVolumeSpecName: "config-data") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.121636 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.606716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"4b73fadc6c8c2f194f24f28709e01df912df317bb62ccab5847b10d6fe6ae833"} Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.607028 4793 scope.go:117] "RemoveContainer" containerID="78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.606846 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.645249 4793 scope.go:117] "RemoveContainer" containerID="952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.654526 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.664620 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.681919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682297 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682307 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core" Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682323 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682330 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent" Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682340 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682345 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd" Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682364 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682370 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682547 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682562 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682573 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682581 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.685241 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.689246 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.689957 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.694195 4793 scope.go:117] "RemoveContainer" containerID="f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.732109 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.733277 4793 scope.go:117] "RemoveContainer" containerID="770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741405 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741530 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741556 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 
14:08:43.843717 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843823 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843891 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843963 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.844018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.844089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.844665 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.846660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.850022 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.850951 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.861307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.864157 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.864742 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0" Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.006222 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.031661 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.066280 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.066469 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics" containerID="cri-o://7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8" gracePeriod=30 Jan 30 14:08:44 crc kubenswrapper[4793]: E0130 14:08:44.175241 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode61af9bc_c79d_4e81_a602_37afbdc017a5.slice/crio-conmon-7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.409865 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" path="/var/lib/kubelet/pods/86bca6e8-77db-4dad-a8d5-3b7718c60688/volumes" Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.626313 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.630925 4793 generic.go:334] "Generic (PLEG): container finished" podID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerID="7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8" exitCode=2 Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.630993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerDied","Data":"7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8"} Jan 30 14:08:44 crc 
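
[Editor's note] Notice that ceilometer-0 and kube-state-metrics-0 keep their names across the delete/re-add cycle but change UIDs (86bca6e8... to 773729ea..., and e61af9bc... to a3625667... below), which is why the cpu_manager, state_mem, and memory_manager entries key their RemoveStaleState cleanup by podUID rather than by pod name. A sketch of that UID-keyed cleanup follows, under illustrative types; these are not kubelet's data structures.

// stalestate.go: drop resource-manager state for pod UIDs that are gone.
package main

import "fmt"

type assignment struct{ container, cpuset string }

// state maps podUID -> per-container resource assignments.
var state = map[string][]assignment{
	"86bca6e8-77db-4dad-a8d5-3b7718c60688": {{"sg-core", "0-1"}, {"proxy-httpd", "2"}},
}

// removeStaleState purges entries whose podUID is no longer active,
// mirroring the "RemoveStaleState: removing container" log entries.
func removeStaleState(activeUIDs map[string]bool) {
	for uid, as := range state {
		if !activeUIDs[uid] {
			for _, a := range as {
				fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", uid, a.container)
			}
			delete(state, uid)
		}
	}
}

func main() {
	// After re-creation only the replacement pod's UID is active.
	removeStaleState(map[string]bool{"773729ea-70f7-46f4-858a-3fbbf522a4cb": true})
}
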
kubenswrapper[4793]: I0130 14:08:44.878315 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.967753 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"e61af9bc-c79d-4e81-a602-37afbdc017a5\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.984804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f" (OuterVolumeSpecName: "kube-api-access-g555f") pod "e61af9bc-c79d-4e81-a602-37afbdc017a5" (UID: "e61af9bc-c79d-4e81-a602-37afbdc017a5"). InnerVolumeSpecName "kube-api-access-g555f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.069705 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.651793 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.651789 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerDied","Data":"71bf22217d9be03e116230139d0442df663407d89a0d201f8b40fe58cd8686cf"} Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.653072 4793 scope.go:117] "RemoveContainer" containerID="7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.656553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1"} Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.656604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"eecf2aa20735ff086d97e3185c5a1181c5ec03a1c551f179de1e5ab7d6e9d69f"} Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.753097 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.780990 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.796187 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:45 crc kubenswrapper[4793]: E0130 14:08:45.796527 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.796541 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.796744 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.797295 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.802509 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.802730 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.805903 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891469 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lpjr\" (UniqueName: \"kubernetes.io/projected/a3625667-be35-4d81-84f9-e00593f1c627-kube-api-access-8lpjr\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lpjr\" (UniqueName: \"kubernetes.io/projected/a3625667-be35-4d81-84f9-e00593f1c627-kube-api-access-8lpjr\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " 
pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.000618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.001167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.001818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.039710 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lpjr\" (UniqueName: \"kubernetes.io/projected/a3625667-be35-4d81-84f9-e00593f1c627-kube-api-access-8lpjr\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.064695 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.417767 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" path="/var/lib/kubelet/pods/e61af9bc-c79d-4e81-a602-37afbdc017a5/volumes" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.667491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041"} Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.774542 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.677923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a3625667-be35-4d81-84f9-e00593f1c627","Type":"ContainerStarted","Data":"e7f9184db53386ef31e0793929c5ebc7d7e2d2ebb6c38c2a7b5886982a8e4476"} Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.678271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a3625667-be35-4d81-84f9-e00593f1c627","Type":"ContainerStarted","Data":"1843f750d363c51d0ba0072dae26fc1f3deb23f4082a149f1fe915f142a2a03f"} Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.678292 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.683394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7"} Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.702095 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.34412437 podStartE2EDuration="2.70207739s" podCreationTimestamp="2026-01-30 14:08:45 +0000 UTC" firstStartedPulling="2026-01-30 14:08:46.75249181 +0000 UTC m=+1537.453840301" lastFinishedPulling="2026-01-30 14:08:47.11044484 +0000 UTC m=+1537.811793321" observedRunningTime="2026-01-30 14:08:47.693736248 +0000 UTC m=+1538.395084729" watchObservedRunningTime="2026-01-30 14:08:47.70207739 +0000 UTC m=+1538.403425881" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.719451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0"} Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.721099 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" containerID="cri-o://7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.721532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.721975 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" 
containerID="cri-o://7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.722160 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" containerID="cri-o://bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.722309 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" containerID="cri-o://9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.730622 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.730986 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": dial tcp 10.217.0.150:9292: i/o timeout" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.743483 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.666082597 podStartE2EDuration="7.743463451s" podCreationTimestamp="2026-01-30 14:08:43 +0000 UTC" firstStartedPulling="2026-01-30 14:08:44.646396941 +0000 UTC m=+1535.347745432" lastFinishedPulling="2026-01-30 14:08:49.723777795 +0000 UTC m=+1540.425126286" observedRunningTime="2026-01-30 14:08:50.739743871 +0000 UTC m=+1541.441092382" watchObservedRunningTime="2026-01-30 14:08:50.743463451 +0000 UTC m=+1541.444811942" Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.728768 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0" exitCode=0 Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729145 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7" exitCode=2 Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729156 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041" exitCode=0 Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.728804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0"} Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729197 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7"} 
Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729214 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041"} Jan 30 14:08:52 crc kubenswrapper[4793]: I0130 14:08:52.398250 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" Jan 30 14:08:52 crc kubenswrapper[4793]: I0130 14:08:52.739478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98"} Jan 30 14:08:53 crc kubenswrapper[4793]: I0130 14:08:53.310188 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:53 crc kubenswrapper[4793]: I0130 14:08:53.754155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerStarted","Data":"90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269"} Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.402810 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.427619 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" podStartSLOduration=3.542153091 podStartE2EDuration="37.427600989s" podCreationTimestamp="2026-01-30 14:08:18 +0000 UTC" firstStartedPulling="2026-01-30 14:08:19.254835123 +0000 UTC m=+1509.956183614" lastFinishedPulling="2026-01-30 14:08:53.140283021 +0000 UTC m=+1543.841631512" observedRunningTime="2026-01-30 14:08:53.779622395 +0000 UTC m=+1544.480970886" watchObservedRunningTime="2026-01-30 14:08:55.427600989 +0000 UTC m=+1546.128949480" Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.472801 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.473219 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" containerID="cri-o://448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" gracePeriod=30 Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.473315 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" containerID="cri-o://320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" gracePeriod=30 Jan 30 14:08:56 crc kubenswrapper[4793]: I0130 14:08:56.082393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 14:08:56 crc kubenswrapper[4793]: I0130 14:08:56.806395 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1" exitCode=0 Jan 30 14:08:56 crc kubenswrapper[4793]: I0130 14:08:56.806672 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1"} Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.007313 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030280 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030400 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030448 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030513 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030700 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030752 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030805 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.031227 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.031516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.040484 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m" (OuterVolumeSpecName: "kube-api-access-xzc8m") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "kube-api-access-xzc8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.066490 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts" (OuterVolumeSpecName: "scripts") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.113570 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.132578 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.132821 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.132914 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.133125 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.143783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.181747 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data" (OuterVolumeSpecName: "config-data") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.234471 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.234507 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.818678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"eecf2aa20735ff086d97e3185c5a1181c5ec03a1c551f179de1e5ab7d6e9d69f"} Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.818795 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.819017 4793 scope.go:117] "RemoveContainer" containerID="7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.851264 4793 scope.go:117] "RemoveContainer" containerID="bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.867566 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.872441 4793 scope.go:117] "RemoveContainer" containerID="9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.914391 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.955567 4793 scope.go:117] "RemoveContainer" containerID="7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.968864 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969426 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969494 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969574 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969633 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969685 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969741 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969847 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969951 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970202 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970298 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970357 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970423 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.972091 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.975867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.976289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.976476 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.980369 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158832 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158870 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158892 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.159937 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.159985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.160280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.262310 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.262997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263162 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263268 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263348 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " 
pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263453 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263657 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.269967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.271483 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.273676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.276474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.281256 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc 
kubenswrapper[4793]: I0130 14:08:58.281351 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.293212 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.409838 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" path="/var/lib/kubelet/pods/773729ea-70f7-46f4-858a-3fbbf522a4cb/volumes" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.765233 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.830434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"0a3be02686a9c4c880d5b9cfa276326d8b8efbc8e4a9d1cedd06cf45fa0269bc"} Jan 30 14:08:59 crc kubenswrapper[4793]: I0130 14:08:59.609547 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:08:59 crc kubenswrapper[4793]: I0130 14:08:59.847943 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} Jan 30 14:09:01 crc kubenswrapper[4793]: I0130 14:09:01.865911 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} Jan 30 14:09:01 crc kubenswrapper[4793]: I0130 14:09:01.866519 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} Jan 30 14:09:05 crc kubenswrapper[4793]: I0130 14:09:05.911879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} Jan 30 14:09:05 crc kubenswrapper[4793]: I0130 14:09:05.913976 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:09:05 crc kubenswrapper[4793]: I0130 14:09:05.935360 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8506229039999997 podStartE2EDuration="8.935343374s" podCreationTimestamp="2026-01-30 14:08:57 +0000 UTC" firstStartedPulling="2026-01-30 14:08:58.75838504 +0000 UTC m=+1549.459733531" lastFinishedPulling="2026-01-30 14:09:04.8431055 +0000 UTC m=+1555.544454001" observedRunningTime="2026-01-30 14:09:05.932925195 +0000 UTC m=+1556.634273696" watchObservedRunningTime="2026-01-30 14:09:05.935343374 +0000 UTC m=+1556.636691865" Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.756239 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.756953 4793 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" containerID="cri-o://c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.757685 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" containerID="cri-o://325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.757726 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" containerID="cri-o://6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.757767 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" containerID="cri-o://767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.965528 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerID="90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269" exitCode=0 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.965604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerDied","Data":"90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269"} Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.976213 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" exitCode=2 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.976273 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.574782 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.761766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.761841 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.761906 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762003 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762027 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762115 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762169 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762224 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.763629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.763723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.768225 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n" (OuterVolumeSpecName: "kube-api-access-sjh8n") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "kube-api-access-sjh8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.768518 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts" (OuterVolumeSpecName: "scripts") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.797001 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.810505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.832973 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.860174 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data" (OuterVolumeSpecName: "config-data") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864308 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864331 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864346 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864356 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864364 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864373 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864380 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864388 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987326 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" exitCode=0 Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987356 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" exitCode=0 Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987366 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" exitCode=0 Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987546 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988778 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988789 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"0a3be02686a9c4c880d5b9cfa276326d8b8efbc8e4a9d1cedd06cf45fa0269bc"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988809 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.018149 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.024817 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.036024 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.055891 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061188 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061549 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061569 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061593 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061601 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061626 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061632 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061644 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061651 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061839 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061866 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061877 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061893 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.063482 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.067517 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.067600 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.067917 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.087402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.099316 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.124531 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.125062 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125106 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} err="failed to get container status \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125131 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc 
kubenswrapper[4793]: E0130 14:09:11.125494 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125529 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} err="failed to get container status \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125556 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.126513 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.126540 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} err="failed to get container status \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.126557 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.126782 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.126815 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} err="failed to get container status \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: 
I0130 14:09:11.126843 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127073 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} err="failed to get container status \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127092 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127273 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} err="failed to get container status \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127291 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127511 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} err="failed to get container status \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127536 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129330 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} err="failed to get container status \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129352 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129677 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} err="failed to get container status 
\"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129725 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130025 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} err="failed to get container status \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130058 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130286 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} err="failed to get container status \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130309 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130519 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} err="failed to get container status \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.168692 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.168734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.168752 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169876 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169946 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271812 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271849 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271927 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.272012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.272034 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.284525 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.284736 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.286252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.286593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.290166 4793 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.296503 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.380034 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.391144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.476661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.477579 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.477793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.477937 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.482261 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts" (OuterVolumeSpecName: "scripts") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.484375 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf" (OuterVolumeSpecName: "kube-api-access-xntcf") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "kube-api-access-xntcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.519365 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data" (OuterVolumeSpecName: "config-data") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.527657 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.582745 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.583158 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.583172 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.583184 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.870629 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.876322 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.996099 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"ee58efa07fa4fa9d8d8272dc1241f3340556be6a43a1bbd522489b6d1c064654"} Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.000513 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerDied","Data":"10458f2044a1485dd49f34389e009c76947a11228dc091b7963498c198351281"} Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.000555 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10458f2044a1485dd49f34389e009c76947a11228dc091b7963498c198351281" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.000648 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.153400 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 14:09:12 crc kubenswrapper[4793]: E0130 14:09:12.153832 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerName="nova-cell0-conductor-db-sync" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.153854 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerName="nova-cell0-conductor-db-sync" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.154088 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerName="nova-cell0-conductor-db-sync" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.154778 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.160439 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.160979 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rgtrf" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.161164 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.192485 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.192977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.193127 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kptrf\" (UniqueName: \"kubernetes.io/projected/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-kube-api-access-kptrf\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.295030 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.295135 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 
14:09:12.295203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kptrf\" (UniqueName: \"kubernetes.io/projected/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-kube-api-access-kptrf\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.301559 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.315153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.318823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kptrf\" (UniqueName: \"kubernetes.io/projected/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-kube-api-access-kptrf\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.409374 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" path="/var/lib/kubelet/pods/a1ae5858-557d-445a-b00f-cbdc514dc672/volumes" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.413792 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.413859 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.487823 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:13 crc kubenswrapper[4793]: I0130 14:09:13.005953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 14:09:13 crc kubenswrapper[4793]: I0130 14:09:13.030826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.043443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7","Type":"ContainerStarted","Data":"892fa6c1c229b673316d98c55fca5515772f1f763e89daeb8075c544712fa9e7"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.043750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.043761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7","Type":"ContainerStarted","Data":"69a0e2f160ecb9f8836eb2fb71c299df78a38363288d4d95b3e3ec748113587d"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.046112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.046153 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.069716 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.069697738 podStartE2EDuration="2.069697738s" podCreationTimestamp="2026-01-30 14:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:14.060854683 +0000 UTC m=+1564.762203184" watchObservedRunningTime="2026-01-30 14:09:14.069697738 +0000 UTC m=+1564.771046219" Jan 30 14:09:15 crc kubenswrapper[4793]: I0130 14:09:15.056372 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" exitCode=1 Jan 30 14:09:15 crc kubenswrapper[4793]: I0130 14:09:15.056570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98"} Jan 30 14:09:15 crc kubenswrapper[4793]: I0130 14:09:15.057146 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" Jan 30 14:09:17 crc kubenswrapper[4793]: I0130 14:09:17.082482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3"} Jan 30 14:09:17 crc kubenswrapper[4793]: 
I0130 14:09:17.083066 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:09:17 crc kubenswrapper[4793]: I0130 14:09:17.117500 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.987627619 podStartE2EDuration="6.117484905s" podCreationTimestamp="2026-01-30 14:09:11 +0000 UTC" firstStartedPulling="2026-01-30 14:09:11.876057997 +0000 UTC m=+1562.577406488" lastFinishedPulling="2026-01-30 14:09:16.005915283 +0000 UTC m=+1566.707263774" observedRunningTime="2026-01-30 14:09:17.112893893 +0000 UTC m=+1567.814242384" watchObservedRunningTime="2026-01-30 14:09:17.117484905 +0000 UTC m=+1567.818833396" Jan 30 14:09:22 crc kubenswrapper[4793]: I0130 14:09:22.528432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.051605 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.053026 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.069880 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.070697 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.089220 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156190 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156236 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156339 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258266 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258317 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.267980 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.268679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.277833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.305762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.370114 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.445826 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.489365 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.495279 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.500117 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.501593 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.528809 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.540113 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.562552 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576681 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576752 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576887 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod 
\"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576955 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.690975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691128 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691245 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod \"nova-metadata-0\" 
(UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691921 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.714371 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.719296 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.727272 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.730919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.732641 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.744849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.745112 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.782363 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.784660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.848100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.857531 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.858752 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.865343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.889173 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.895447 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.899745 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.899862 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.899914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.906140 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.907867 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.913496 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.936483 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002204 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002265 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002316 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002337 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002373 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002411 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.009415 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.013871 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.028313 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.106885 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107532 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc 
kubenswrapper[4793]: I0130 14:09:24.108883 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.108983 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.109818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.110448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.110979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.120041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.120163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.133500 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.141730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.195800 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.224670 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.282325 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.465453 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.618162 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: W0130 14:09:24.644897 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc7ab8_eaab_4f40_888a_e31e12e7e773.slice/crio-0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca WatchSource:0}: Error finding container 0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca: Status 404 returned error can't find the container with id 0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.845559 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: W0130 14:09:24.851512 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea153b39_273a_489d_8964_8cfddfc788e1.slice/crio-b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b WatchSource:0}: Error finding container b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b: Status 404 returned error can't find the container with id b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.965874 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: W0130 14:09:24.971054 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod946dbfc0_785c_4159_af93_83c11dd8d7e1.slice/crio-1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf WatchSource:0}: Error finding container 1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf: Status 404 returned error can't find the container with id 1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.060986 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.062250 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.064787 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.064976 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.100955 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:09:25 crc kubenswrapper[4793]: W0130 14:09:25.131949 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1817ab34_b020_4268_b88c_126dc437c966.slice/crio-51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d WatchSource:0}: Error finding container 51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d: Status 404 returned error can't find the container with id 51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138363 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138530 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.139411 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.188671 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerStarted","Data":"1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.192777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerStarted","Data":"0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.195689 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerStarted","Data":"6dbb7f15722c7e00d18758c1026e64f9f4f3aa22d601bb8b93724467cdca1d2e"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.198874 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerStarted","Data":"51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.203991 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerStarted","Data":"b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.206517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerStarted","Data":"c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.206543 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerStarted","Data":"b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.227974 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-75k58" podStartSLOduration=2.227956068 podStartE2EDuration="2.227956068s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:25.223559912 +0000 UTC m=+1575.924908403" watchObservedRunningTime="2026-01-30 14:09:25.227956068 +0000 UTC m=+1575.929304559" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240393 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240460 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.245781 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.245927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.246425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.258991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.394972 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.968116 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:09:25 crc kubenswrapper[4793]: W0130 14:09:25.982615 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bc0c92_8817_447f_a591_d593d49d1b22.slice/crio-a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0 WatchSource:0}: Error finding container a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0: Status 404 returned error can't find the container with id a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0 Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.097748 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.171692 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.171805 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.171880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.172097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.172181 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.172873 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs" (OuterVolumeSpecName: "logs") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.181495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg" (OuterVolumeSpecName: "kube-api-access-wstbg") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "kube-api-access-wstbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.182253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.212942 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data" (OuterVolumeSpecName: "config-data") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.225322 4793 generic.go:334] "Generic (PLEG): container finished" podID="1817ab34-b020-4268-b88c-126dc437c966" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b" exitCode=0 Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.225410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerDied","Data":"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.232394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerStarted","Data":"a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.235701 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" exitCode=137 Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.236736 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.237056 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.237392 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"abb829370f6052fa5b93898ca6acb8788a4543ea051b65ba7f0f97b896bb3dd6"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.237495 4793 scope.go:117] "RemoveContainer" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.241806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts" (OuterVolumeSpecName: "scripts") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274365 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274413 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274425 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274438 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274454 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.575835 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.593968 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.634496 4793 scope.go:117] "RemoveContainer" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.029977 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.043359 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.253560 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerStarted","Data":"d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920"} Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.568584 4793 scope.go:117] "RemoveContainer" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" Jan 30 14:09:27 crc kubenswrapper[4793]: E0130 14:09:27.569379 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98\": container with ID starting with 320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98 not found: ID does not exist" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.569411 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98"} err="failed to get container status \"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98\": rpc error: code = NotFound desc = could not find container 
\"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98\": container with ID starting with 320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98 not found: ID does not exist" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.569429 4793 scope.go:117] "RemoveContainer" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" Jan 30 14:09:27 crc kubenswrapper[4793]: E0130 14:09:27.570635 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c\": container with ID starting with 448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c not found: ID does not exist" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.570685 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c"} err="failed to get container status \"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c\": rpc error: code = NotFound desc = could not find container \"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c\": container with ID starting with 448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c not found: ID does not exist" Jan 30 14:09:28 crc kubenswrapper[4793]: I0130 14:09:28.410013 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" path="/var/lib/kubelet/pods/ecab991a-220f-4b09-a1fa-f43fef3d0be5/volumes" Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.278771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerStarted","Data":"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"} Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.279030 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.305463 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" podStartSLOduration=6.305443743 podStartE2EDuration="6.305443743s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:29.303265621 +0000 UTC m=+1580.004614112" watchObservedRunningTime="2026-01-30 14:09:29.305443743 +0000 UTC m=+1580.006792234" Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.307625 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" podStartSLOduration=4.307614126 podStartE2EDuration="4.307614126s" podCreationTimestamp="2026-01-30 14:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:27.270724023 +0000 UTC m=+1577.972072504" watchObservedRunningTime="2026-01-30 14:09:29.307614126 +0000 UTC m=+1580.008962617" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.295933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerStarted","Data":"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.296063 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" gracePeriod=30 Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.300849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerStarted","Data":"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.300889 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerStarted","Data":"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerStarted","Data":"9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerStarted","Data":"84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304261 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" containerID="cri-o://84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065" gracePeriod=30 Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304504 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" containerID="cri-o://9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa" gracePeriod=30 Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.311569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerStarted","Data":"aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.318430 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.682230321 podStartE2EDuration="7.318413618s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.975891483 +0000 UTC m=+1575.677239984" lastFinishedPulling="2026-01-30 14:09:28.61207479 +0000 UTC m=+1579.313423281" observedRunningTime="2026-01-30 14:09:30.314267587 +0000 UTC m=+1581.015616078" watchObservedRunningTime="2026-01-30 14:09:30.318413618 +0000 UTC m=+1581.019762109" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.339828 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=3.57150707 podStartE2EDuration="7.339811816s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.856111283 +0000 UTC m=+1575.557459774" lastFinishedPulling="2026-01-30 14:09:28.624416029 +0000 UTC m=+1579.325764520" observedRunningTime="2026-01-30 14:09:30.333652567 +0000 UTC m=+1581.035001058" watchObservedRunningTime="2026-01-30 14:09:30.339811816 +0000 UTC m=+1581.041160307" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.358800 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.388411484 podStartE2EDuration="7.358781906s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.652617873 +0000 UTC m=+1575.353966364" lastFinishedPulling="2026-01-30 14:09:28.622988295 +0000 UTC m=+1579.324336786" observedRunningTime="2026-01-30 14:09:30.352647306 +0000 UTC m=+1581.053995807" watchObservedRunningTime="2026-01-30 14:09:30.358781906 +0000 UTC m=+1581.060130397" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.381465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.289222252 podStartE2EDuration="7.381447004s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.514608891 +0000 UTC m=+1575.215957382" lastFinishedPulling="2026-01-30 14:09:28.606833643 +0000 UTC m=+1579.308182134" observedRunningTime="2026-01-30 14:09:30.374791023 +0000 UTC m=+1581.076139514" watchObservedRunningTime="2026-01-30 14:09:30.381447004 +0000 UTC m=+1581.082795495" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340410 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerID="9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa" exitCode=0 Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340443 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerID="84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065" exitCode=143 Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerDied","Data":"9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa"} Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerDied","Data":"84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065"} Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.636034 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.811786 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.812128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.812234 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.812568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.813058 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs" (OuterVolumeSpecName: "logs") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.813657 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.833467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7" (OuterVolumeSpecName: "kube-api-access-cdgh7") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "kube-api-access-cdgh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.855560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.867951 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data" (OuterVolumeSpecName: "config-data") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.915897 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.915953 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.915970 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.354806 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerDied","Data":"6dbb7f15722c7e00d18758c1026e64f9f4f3aa22d601bb8b93724467cdca1d2e"} Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.354866 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.354883 4793 scope.go:117] "RemoveContainer" containerID="9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.397295 4793 scope.go:117] "RemoveContainer" containerID="84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.421661 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.421704 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435259 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435821 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435838 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435854 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435862 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435878 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435887 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435897 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" Jan 30 14:09:32 crc 
kubenswrapper[4793]: I0130 14:09:32.435905 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435924 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435947 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435954 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436232 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436252 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436267 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436286 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436301 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436315 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436332 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.436570 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436612 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.437602 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.443582 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.443904 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.468875 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539287 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539382 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642514 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642716 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.643230 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.647445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.647515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.648546 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.667533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.767533 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.297655 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.367268 4793 generic.go:334] "Generic (PLEG): container finished" podID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerID="c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37" exitCode=0 Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.367339 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerDied","Data":"c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37"} Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.373586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerStarted","Data":"7c4f0710cca9ec558ca9e50b3847b8e52ce3fc8d37d022a77990843a2d1c1719"} Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.889842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.890206 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.107636 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.107966 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.148342 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.196366 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.284685 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.380236 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"] Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.384142 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" containerID="cri-o://b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" gracePeriod=10 Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.426322 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" path="/var/lib/kubelet/pods/7e4e3a7e-0fdd-4b58-956c-968b50689ce5/volumes" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.450950 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerStarted","Data":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.453277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerStarted","Data":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.476343 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.486697 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.507752 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.507728432 podStartE2EDuration="2.507728432s" podCreationTimestamp="2026-01-30 14:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:34.446220492 +0000 UTC m=+1585.147568993" watchObservedRunningTime="2026-01-30 14:09:34.507728432 +0000 UTC m=+1585.209076933" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.953291 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:34.998973 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:34.999380 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088407 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088508 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088565 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc 
kubenswrapper[4793]: I0130 14:09:35.107508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8" (OuterVolumeSpecName: "kube-api-access-fxpm8") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "kube-api-access-fxpm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.120990 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts" (OuterVolumeSpecName: "scripts") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.124390 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.130363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data" (OuterVolumeSpecName: "config-data") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.190963 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.191006 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.191018 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.191032 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.256588 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.393849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394198 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394228 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394315 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394417 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394735 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.398843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t" (OuterVolumeSpecName: "kube-api-access-lzc2t") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "kube-api-access-lzc2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427673 4793 generic.go:334] "Generic (PLEG): container finished" podID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" exitCode=0 Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427751 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerDied","Data":"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9"} Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427778 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerDied","Data":"067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7"} Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427821 4793 scope.go:117] "RemoveContainer" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427991 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.453681 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.457237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerDied","Data":"b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc"} Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.457296 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.461180 4793 scope.go:117] "RemoveContainer" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.480483 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.483662 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config" (OuterVolumeSpecName: "config") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.492840 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497379 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497406 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497415 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497425 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.511858 4793 scope.go:117] "RemoveContainer" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" Jan 30 14:09:35 crc kubenswrapper[4793]: E0130 14:09:35.512275 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9\": container with ID starting with b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9 not found: ID does not exist" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.512306 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9"} err="failed to get container status \"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9\": rpc error: code = NotFound desc = could not find container \"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9\": container with ID starting with b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9 not found: ID does not exist" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.512331 4793 scope.go:117] "RemoveContainer" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" Jan 30 14:09:35 crc kubenswrapper[4793]: E0130 14:09:35.512587 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74\": container with ID starting with b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74 not found: ID does not exist" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.512611 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74"} err="failed to get container status \"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74\": rpc error: code = NotFound desc = could not find container \"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74\": container with ID starting with 
b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74 not found: ID does not exist" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.568289 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.570193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.583986 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.584209 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" containerID="cri-o://3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3" gracePeriod=30 Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.584701 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" containerID="cri-o://b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003" gracePeriod=30 Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.601470 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.603437 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.667660 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.719509 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:35 crc kubenswrapper[4793]: E0130 14:09:35.759377 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebcc9239_aedb_41d4_bac8_d03c56c76f4a.slice/crio-b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc7ab8_eaab_4f40_888a_e31e12e7e773.slice/crio-3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebcc9239_aedb_41d4_bac8_d03c56c76f4a.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc7ab8_eaab_4f40_888a_e31e12e7e773.slice/crio-conmon-3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.766489 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"] Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.774748 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"] Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.409772 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" path="/var/lib/kubelet/pods/bbe3cabf-7884-41df-adac-ad1bf7e76bf9/volumes" Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463378 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3" exitCode=143 Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463564 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" containerID="cri-o://f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" gracePeriod=30 Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerDied","Data":"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"} Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463891 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler" containerID="cri-o://aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee" gracePeriod=30 Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.464128 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" containerID="cri-o://02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" gracePeriod=30 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.032676 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.134808 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.134964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.135158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.135199 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.135805 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.136098 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs" (OuterVolumeSpecName: "logs") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.136534 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.140622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b" (OuterVolumeSpecName: "kube-api-access-b7r2b") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "kube-api-access-b7r2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.161708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.177003 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data" (OuterVolumeSpecName: "config-data") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.204287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238075 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238110 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238120 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238129 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.475374 4793 generic.go:334] "Generic (PLEG): container finished" podID="45bc0c92-8817-447f-a591-d593d49d1b22" containerID="d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920" exitCode=0 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.476720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerDied","Data":"d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488172 4793 generic.go:334] "Generic (PLEG): container finished" podID="dc77fb59-5785-42af-8629-c3bd9e024983" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" exitCode=0 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488201 4793 generic.go:334] "Generic (PLEG): container finished" podID="dc77fb59-5785-42af-8629-c3bd9e024983" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" exitCode=143 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488236 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerDied","Data":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerDied","Data":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerDied","Data":"7c4f0710cca9ec558ca9e50b3847b8e52ce3fc8d37d022a77990843a2d1c1719"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488500 4793 scope.go:117] "RemoveContainer" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488784 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.510359 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea153b39-273a-489d-8964-8cfddfc788e1" containerID="aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee" exitCode=0 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.510401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerDied","Data":"aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.554011 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.562428 4793 scope.go:117] "RemoveContainer" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.565583 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575102 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575495 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="init" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575508 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="init" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575528 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575549 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575556 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575578 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerName="nova-manage" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575583 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerName="nova-manage" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575590 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575597 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575755 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575770 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575783 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575793 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerName="nova-manage" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.577106 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.582764 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.582764 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.596376 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.611409 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.628363 4793 scope.go:117] "RemoveContainer" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.628780 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": container with ID starting with 02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d not found: ID does not exist" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.628818 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} err="failed to get container status \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": rpc error: code = NotFound desc = could not find container \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": container with ID starting with 02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.628838 4793 scope.go:117] "RemoveContainer" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.629187 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": container with ID starting with f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b not found: ID does not exist" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629225 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} err="failed to get container status \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": rpc error: code = NotFound desc = could not find container \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": container with ID starting with f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629240 4793 scope.go:117] "RemoveContainer" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629466 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} err="failed to get container status \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": rpc error: code = NotFound desc = could not find container \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": container with ID starting with 02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629481 4793 scope.go:117] "RemoveContainer" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629718 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} err="failed to get container status \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": rpc error: code = NotFound desc = could not find container \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": container with ID starting with f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650284 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650327 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650506 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.752914 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"ea153b39-273a-489d-8964-8cfddfc788e1\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.753294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"ea153b39-273a-489d-8964-8cfddfc788e1\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.753416 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"ea153b39-273a-489d-8964-8cfddfc788e1\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " Jan 30 14:09:37 crc kubenswrapper[4793]: 
I0130 14:09:37.753831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754034 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754193 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.756631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.758604 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.758810 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv" (OuterVolumeSpecName: "kube-api-access-hjzmv") pod "ea153b39-273a-489d-8964-8cfddfc788e1" (UID: "ea153b39-273a-489d-8964-8cfddfc788e1"). InnerVolumeSpecName "kube-api-access-hjzmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.760660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.763234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.776780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.787525 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data" (OuterVolumeSpecName: "config-data") pod "ea153b39-273a-489d-8964-8cfddfc788e1" (UID: "ea153b39-273a-489d-8964-8cfddfc788e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.802451 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea153b39-273a-489d-8964-8cfddfc788e1" (UID: "ea153b39-273a-489d-8964-8cfddfc788e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.856022 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.856070 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.856081 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.929983 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.375579 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: W0130 14:09:38.382468 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ed6c75_bf0d_4f2f_a470_42fd54e304da.slice/crio-8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721 WatchSource:0}: Error finding container 8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721: Status 404 returned error can't find the container with id 8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.412755 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" path="/var/lib/kubelet/pods/dc77fb59-5785-42af-8629-c3bd9e024983/volumes"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.526503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerDied","Data":"b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b"}
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.526832 4793 scope.go:117] "RemoveContainer" containerID="aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.526768 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.535870 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerStarted","Data":"8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721"}
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.552673 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.559406 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.582616 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: E0130 14:09:38.583001 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.583018 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.583230 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.583804 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.587206 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.615710 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.680956 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.681030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.681087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.783081 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.783164 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.783221 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.790912 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.791365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.803195 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.864760 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.907562 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987754 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.992713 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts" (OuterVolumeSpecName: "scripts") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.998946 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb" (OuterVolumeSpecName: "kube-api-access-pvphb") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "kube-api-access-pvphb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.039655 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data" (OuterVolumeSpecName: "config-data") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.061116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091203 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091236 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091245 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091252 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.381737 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.550023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerStarted","Data":"0c43fd7a19c8e62a860f534d7237c66cb3f8e183b6b7d0b236a6b8cd04692810"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.555739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerDied","Data":"a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.555784 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.555850 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.568903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerStarted","Data":"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.568953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerStarted","Data":"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.600775 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 14:09:39 crc kubenswrapper[4793]: E0130 14:09:39.601120 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" containerName="nova-cell1-conductor-db-sync"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.601131 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" containerName="nova-cell1-conductor-db-sync"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.601430 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" containerName="nova-cell1-conductor-db-sync"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.623769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.632148 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.679196 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.694833 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.694811732 podStartE2EDuration="2.694811732s" podCreationTimestamp="2026-01-30 14:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:39.600502417 +0000 UTC m=+1590.301850908" watchObservedRunningTime="2026-01-30 14:09:39.694811732 +0000 UTC m=+1590.396160233"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.703518 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.703596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.703762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgq7n\" (UniqueName: \"kubernetes.io/projected/d2acd609-26c0-4b98-861f-a8b12fcd07bf-kube-api-access-xgq7n\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.805770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.805827 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.805886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgq7n\" (UniqueName: \"kubernetes.io/projected/d2acd609-26c0-4b98-861f-a8b12fcd07bf-kube-api-access-xgq7n\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.811284 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.812426 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.821154 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgq7n\" (UniqueName: \"kubernetes.io/projected/d2acd609-26c0-4b98-861f-a8b12fcd07bf-kube-api-access-xgq7n\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.948699 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.411459 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" path="/var/lib/kubelet/pods/ea153b39-273a-489d-8964-8cfddfc788e1/volumes"
Jan 30 14:09:40 crc kubenswrapper[4793]: W0130 14:09:40.431378 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2acd609_26c0_4b98_861f_a8b12fcd07bf.slice/crio-0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0 WatchSource:0}: Error finding container 0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0: Status 404 returned error can't find the container with id 0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.432857 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.578784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d2acd609-26c0-4b98-861f-a8b12fcd07bf","Type":"ContainerStarted","Data":"0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0"}
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.581155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerStarted","Data":"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a"}
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.602709 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.60268697 podStartE2EDuration="2.60268697s" podCreationTimestamp="2026-01-30 14:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:40.599028511 +0000 UTC m=+1591.300377012" watchObservedRunningTime="2026-01-30 14:09:40.60268697 +0000 UTC m=+1591.304035461"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.403829 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.574792 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.621945 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003" exitCode=0
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerDied","Data":"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"}
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerDied","Data":"0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca"}
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622017 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622285 4793 scope.go:117] "RemoveContainer" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.642726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.642891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643016 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643162 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643514 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs" (OuterVolumeSpecName: "logs") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643693 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.645293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d2acd609-26c0-4b98-861f-a8b12fcd07bf","Type":"ContainerStarted","Data":"fae27845939cb8c0afbf747f63b3a1a8d4c95dac8d7eb0b4f48c1fa2352a21a3"}
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.645332 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.654577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t" (OuterVolumeSpecName: "kube-api-access-t4h9t") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "kube-api-access-t4h9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.668219 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.668198006 podStartE2EDuration="2.668198006s" podCreationTimestamp="2026-01-30 14:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:41.667658564 +0000 UTC m=+1592.369007065" watchObservedRunningTime="2026-01-30 14:09:41.668198006 +0000 UTC m=+1592.369546497"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.678435 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data" (OuterVolumeSpecName: "config-data") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.688778 4793 scope.go:117] "RemoveContainer" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.714552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.746817 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.746867 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.746880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.785535 4793 scope.go:117] "RemoveContainer" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"
Jan 30 14:09:41 crc kubenswrapper[4793]: E0130 14:09:41.785989 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": container with ID starting with b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003 not found: ID does not exist" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.786023 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"} err="failed to get container status \"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": rpc error: code = NotFound desc = could not find container \"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": container with ID starting with b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003 not found: ID does not exist"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.786055 4793 scope.go:117] "RemoveContainer" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"
Jan 30 14:09:41 crc kubenswrapper[4793]: E0130 14:09:41.786410 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3\": container with ID starting with 3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3 not found: ID does not exist" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.786437 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"} err="failed to get container status \"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3\": rpc error: code = NotFound desc = could not find container \"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3\": container with ID starting with 3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3 not found: ID does not exist"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.971092 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.984203 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.002064 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:09:42 crc kubenswrapper[4793]: E0130 14:09:42.002835 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.002867 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log"
Jan 30 14:09:42 crc kubenswrapper[4793]: E0130 14:09:42.002890 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.002899 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.003212 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.003250 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.004786 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.008319 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.027929 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159617 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261659 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.262695 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.271855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.272653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.286375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.322790 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.408845 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" path="/var/lib/kubelet/pods/c0bc7ab8-eaab-4f40-888a-e31e12e7e773/volumes"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.419309 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.419634 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:09:42 crc kubenswrapper[4793]: W0130 14:09:42.783647 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod192f1855_5895_4928_ad91_e3bded531967.slice/crio-a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90 WatchSource:0}: Error finding container a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90: Status 404 returned error can't find the container with id a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.787580 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.931143 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.931381 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.663520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerStarted","Data":"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d"}
Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.663562 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerStarted","Data":"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f"}
Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.663572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerStarted","Data":"a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90"}
Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.688152 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.688129779 podStartE2EDuration="2.688129779s" podCreationTimestamp="2026-01-30 14:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:43.678874764 +0000 UTC m=+1594.380223255" watchObservedRunningTime="2026-01-30 14:09:43.688129779 +0000 UTC m=+1594.389478280"
Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.907918 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 14:09:47 crc kubenswrapper[4793]: I0130 14:09:47.930758 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 14:09:47 crc kubenswrapper[4793]: I0130 14:09:47.931170 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.907830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.934176 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.943359 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.943425 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:09:49 crc kubenswrapper[4793]: I0130 14:09:49.765174 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 14:09:49 crc kubenswrapper[4793]: I0130 14:09:49.980652 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:52 crc kubenswrapper[4793]: I0130 14:09:52.324126 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 14:09:52 crc kubenswrapper[4793]: I0130 14:09:52.324492 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 14:09:53 crc kubenswrapper[4793]: I0130 14:09:53.406329 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:09:53 crc kubenswrapper[4793]: I0130 14:09:53.406454 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.937641 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.939593 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.946649 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.947529 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.663512 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.798767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"946dbfc0-785c-4159-af93-83c11dd8d7e1\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") "
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.798951 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"946dbfc0-785c-4159-af93-83c11dd8d7e1\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") "
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.799020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"946dbfc0-785c-4159-af93-83c11dd8d7e1\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") "
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.805183 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td" (OuterVolumeSpecName: "kube-api-access-8m4td") pod "946dbfc0-785c-4159-af93-83c11dd8d7e1" (UID: "946dbfc0-785c-4159-af93-83c11dd8d7e1"). InnerVolumeSpecName "kube-api-access-8m4td". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.826136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "946dbfc0-785c-4159-af93-83c11dd8d7e1" (UID: "946dbfc0-785c-4159-af93-83c11dd8d7e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.828430 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data" (OuterVolumeSpecName: "config-data") pod "946dbfc0-785c-4159-af93-83c11dd8d7e1" (UID: "946dbfc0-785c-4159-af93-83c11dd8d7e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868459 4793 generic.go:334] "Generic (PLEG): container finished" podID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" exitCode=137
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerDied","Data":"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"}
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868532 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerDied","Data":"1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf"}
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868551 4793 scope.go:117] "RemoveContainer" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868553 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.901489 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.901523 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.901538 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.915336 4793 scope.go:117] "RemoveContainer" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"
Jan 30 14:10:00 crc kubenswrapper[4793]: E0130 14:10:00.918974 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d\": container with ID starting with 32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d not found: ID does not exist" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.919249 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"} err="failed to get container status \"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d\": rpc error: code = NotFound desc = could not find container \"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d\": container with ID starting with 32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d not found: ID does not exist"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.938379 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.949116 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.962753 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 14:10:00 crc kubenswrapper[4793]: E0130 14:10:00.963279 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.963299 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.963530 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.964336 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.967310 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.967562 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.967677 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.976563 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105689 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vpw\" (UniqueName: \"kubernetes.io/projected/abaabb74-42dd-40b6-9cb7-69db46f235df-kube-api-access-j4vpw\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.106147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207883 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207907 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vpw\" (UniqueName: \"kubernetes.io/projected/abaabb74-42dd-40b6-9cb7-69db46f235df-kube-api-access-j4vpw\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.212666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.213913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.215821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.217684 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.226237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vpw\" (UniqueName: \"kubernetes.io/projected/abaabb74-42dd-40b6-9cb7-69db46f235df-kube-api-access-j4vpw\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.291877 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.758688 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 14:10:01 crc kubenswrapper[4793]: W0130 14:10:01.767945 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabaabb74_42dd_40b6_9cb7_69db46f235df.slice/crio-bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d WatchSource:0}: Error finding container bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d: Status 404 returned error can't find the container with id bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d
Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.882431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"abaabb74-42dd-40b6-9cb7-69db46f235df","Type":"ContainerStarted","Data":"bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d"}
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.330223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.331064 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.331672 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.334822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.428938 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" path="/var/lib/kubelet/pods/946dbfc0-785c-4159-af93-83c11dd8d7e1/volumes"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.900329 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"abaabb74-42dd-40b6-9cb7-69db46f235df","Type":"ContainerStarted","Data":"96d21d4383f42ab4e78d9f1eb561cbc4de823973cf57bcc4f3433a0cf8728d8b"}
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.901216 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.903545 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.928376 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.928357944 podStartE2EDuration="2.928357944s" podCreationTimestamp="2026-01-30 14:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:02.917228824 +0000 UTC m=+1613.618577315" watchObservedRunningTime="2026-01-30 14:10:02.928357944 +0000 UTC m=+1613.629706435"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.118391 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"]
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.120195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.150230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"]
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264009 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264300 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264488 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.365709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.365975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.367293 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.367404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.367450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.388824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.439444 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.571548 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"]
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.919458 4793 generic.go:334] "Generic (PLEG): container finished" podID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" exitCode=0
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.919563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerDied","Data":"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889"}
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.920011 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerStarted","Data":"78fb92af330aba5ae85ee09e8c30d31dd6612ee663286c5bea03ea04be9abef3"}
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.804608 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.948060 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.948562 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" containerID="cri-o://14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1" gracePeriod=30
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.948815 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" containerID="cri-o://22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3" gracePeriod=30
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.949070 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" containerID="cri-o://35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f" gracePeriod=30 Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.949507 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" containerID="cri-o://9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5" gracePeriod=30 Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.963312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerStarted","Data":"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4"} Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.966015 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" containerID="cri-o://dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" gracePeriod=30 Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.967770 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.967813 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" containerID="cri-o://a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" gracePeriod=30 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.008583 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" podStartSLOduration=3.008560146 podStartE2EDuration="3.008560146s" podCreationTimestamp="2026-01-30 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:05.989832102 +0000 UTC m=+1616.691180613" watchObservedRunningTime="2026-01-30 14:10:06.008560146 +0000 UTC m=+1616.709908647" Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.292006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:06 crc kubenswrapper[4793]: E0130 14:10:06.471400 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d6b1bbd_8431_4e0c_882a_6ec9dee336f2.slice/crio-conmon-14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981347 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerID="22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3" exitCode=0 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981380 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerID="9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5" exitCode=2 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981389 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" 
containerID="14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1" exitCode=0 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3"} Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5"} Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1"} Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.983349 4793 generic.go:334] "Generic (PLEG): container finished" podID="192f1855-5895-4928-ad91-e3bded531967" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" exitCode=143 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.984528 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerDied","Data":"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f"} Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.605632 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802556 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802679 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802790 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.803740 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs" (OuterVolumeSpecName: "logs") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.815489 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq" (OuterVolumeSpecName: "kube-api-access-vbbdq") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "kube-api-access-vbbdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.859489 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.869022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data" (OuterVolumeSpecName: "config-data") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905270 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905302 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905314 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905322 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024354 4793 generic.go:334] "Generic (PLEG): container finished" podID="192f1855-5895-4928-ad91-e3bded531967" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" exitCode=0 Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerDied","Data":"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d"} Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerDied","Data":"a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90"} Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024461 4793 scope.go:117] "RemoveContainer" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024482 4793 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.062702 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.066626 4793 scope.go:117] "RemoveContainer" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.073733 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.088361 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.088974 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089051 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.089143 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089229 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089543 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089624 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.090895 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.097275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.101447 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.101467 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.113805 4793 scope.go:117] "RemoveContainer" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.115220 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d\": container with ID starting with a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d not found: ID does not exist" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.115259 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d"} err="failed to get container status \"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d\": rpc error: code = NotFound desc = could not find container \"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d\": container with ID starting with a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d not found: ID does not exist" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.115291 4793 scope.go:117] "RemoveContainer" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.116357 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f\": container with ID starting with dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f not found: ID does not exist" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.116409 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f"} err="failed to get container status \"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f\": rpc error: code = NotFound desc = could not find container \"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f\": container with ID starting with dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f not found: ID does not exist" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.120909 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.210499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 
14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.210882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211183 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211290 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211341 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.314574 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.314685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.314740 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.315075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.316365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"nova-api-0\" (UID: 
\"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.316435 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.317011 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318472 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318799 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318994 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.328627 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.330636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.332673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.338617 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.408941 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192f1855-5895-4928-ad91-e3bded531967" path="/var/lib/kubelet/pods/192f1855-5895-4928-ad91-e3bded531967/volumes" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.410613 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.879890 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: W0130 14:10:10.882965 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f197d5_ac2e_4907_aaaf_78ac1156368c.slice/crio-e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86 WatchSource:0}: Error finding container e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86: Status 404 returned error can't find the container with id e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86 Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.044515 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerStarted","Data":"e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86"} Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.051077 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerID="35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f" exitCode=0 Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.051109 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f"} Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.106077 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144111 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144189 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144388 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc 
kubenswrapper[4793]: I0130 14:10:11.144445 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144503 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.146304 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.146567 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.155149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk" (OuterVolumeSpecName: "kube-api-access-ss2qk") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "kube-api-access-ss2qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.167495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts" (OuterVolumeSpecName: "scripts") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.225728 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.242516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246567 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246595 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246605 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246614 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246623 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246631 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.292370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.307790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.327147 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.350241 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.368478 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data" (OuterVolumeSpecName: "config-data") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.452439 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.074712 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"ee58efa07fa4fa9d8d8272dc1241f3340556be6a43a1bbd522489b6d1c064654"} Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.074767 4793 scope.go:117] "RemoveContainer" containerID="22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.074969 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.078234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerStarted","Data":"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"} Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.078293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerStarted","Data":"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"} Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.099156 4793 scope.go:117] "RemoveContainer" containerID="9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.111621 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.123892 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.123842766 podStartE2EDuration="2.123842766s" podCreationTimestamp="2026-01-30 14:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:12.111983809 +0000 UTC m=+1622.813332300" watchObservedRunningTime="2026-01-30 14:10:12.123842766 +0000 UTC m=+1622.825191257" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.142615 4793 scope.go:117] "RemoveContainer" containerID="35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.167203 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.185987 4793 scope.go:117] "RemoveContainer" containerID="14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.188527 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211086 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211837 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211865 4793 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211886 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211896 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211935 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211945 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211955 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211963 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212211 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212251 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212262 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212276 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.214344 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.226057 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.226216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.226473 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.259076 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.274804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.274917 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-run-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.274952 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-config-data\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-log-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfq9p\" (UniqueName: \"kubernetes.io/projected/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-kube-api-access-lfq9p\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-scripts\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.348459 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.349602 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.359748 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.361792 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.367915 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.380937 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-config-data\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381197 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-log-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-log-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.382914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfq9p\" (UniqueName: \"kubernetes.io/projected/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-kube-api-access-lfq9p\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.382955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-scripts\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" 
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383128 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383206 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-run-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.384086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-run-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.385131 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.397139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-scripts\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.398585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.399239 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-config-data\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.401655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.402183 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.410439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfq9p\" (UniqueName: \"kubernetes.io/projected/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-kube-api-access-lfq9p\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.413728 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.413781 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.419265 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" path="/var/lib/kubelet/pods/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2/volumes"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.420360 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.421087 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.421150 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" gracePeriod=600
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489183 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489705 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.502244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.502703 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.504899 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.519703 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.553906 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.556319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.672708 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.077162 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:10:13 crc kubenswrapper[4793]: W0130 14:10:13.083949 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9dd9b5_407b_47a1_91ee_5ee7a8b4816d.slice/crio-56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4 WatchSource:0}: Error finding container 56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4: Status 404 returned error can't find the container with id 56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.096458 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" exitCode=0
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.096489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"}
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.096535 4793 scope.go:117] "RemoveContainer" containerID="f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.097136 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:10:13 crc kubenswrapper[4793]: E0130 14:10:13.097404 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:10:13 crc kubenswrapper[4793]: W0130 14:10:13.344511 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ed75d8_77f2_4c4d_b725_b703b8ce2980.slice/crio-d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c WatchSource:0}: Error finding container d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c: Status 404 returned error can't find the container with id d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.347240 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"]
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.441226 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.512517 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"]
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.513267 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns" containerID="cri-o://62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" gracePeriod=10
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.032347 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.112883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f"}
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.112922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4"}
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.114807 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerStarted","Data":"596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80"}
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.114849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerStarted","Data":"d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c"}
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116848 4793 generic.go:334] "Generic (PLEG): container finished" podID="1817ab34-b020-4268-b88c-126dc437c966" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" exitCode=0
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerDied","Data":"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"}
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerDied","Data":"51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d"}
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116949 4793 scope.go:117] "RemoveContainer" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.117149 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") "
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") "
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124436 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") "
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124501 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") "
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124581 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") "
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124673 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") "
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.137762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz" (OuterVolumeSpecName: "kube-api-access-nj6mz") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "kube-api-access-nj6mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.152707 4793 scope.go:117] "RemoveContainer" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.154772 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mrwzs" podStartSLOduration=2.154746314 podStartE2EDuration="2.154746314s" podCreationTimestamp="2026-01-30 14:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:14.143203804 +0000 UTC m=+1624.844552315" watchObservedRunningTime="2026-01-30 14:10:14.154746314 +0000 UTC m=+1624.856094805"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.185927 4793 scope.go:117] "RemoveContainer" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"
Jan 30 14:10:14 crc kubenswrapper[4793]: E0130 14:10:14.186591 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad\": container with ID starting with 62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad not found: ID does not exist" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.186744 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"} err="failed to get container status \"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad\": rpc error: code = NotFound desc = could not find container \"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad\": container with ID starting with 62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad not found: ID does not exist"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.186842 4793 scope.go:117] "RemoveContainer" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"
Jan 30 14:10:14 crc kubenswrapper[4793]: E0130 14:10:14.187260 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b\": container with ID starting with 7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b not found: ID does not exist" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.187350 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"} err="failed to get container status \"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b\": rpc error: code = NotFound desc = could not find container \"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b\": container with ID starting with 7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b not found: ID does not exist"
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.198765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config" (OuterVolumeSpecName: "config") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.201588 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.203642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.221550 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230929 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230965 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230981 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230990 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230999 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.257714 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.332263 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.565172 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"]
Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.580567 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"]
Jan 30 14:10:15 crc kubenswrapper[4793]: I0130 14:10:15.129788 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"b9118352f798bed71e82ce4b518d07c49a400170692d3a7bebe81a94dcc220cb"}
Jan 30 14:10:16 crc kubenswrapper[4793]: I0130 14:10:16.523998 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1817ab34-b020-4268-b88c-126dc437c966" path="/var/lib/kubelet/pods/1817ab34-b020-4268-b88c-126dc437c966/volumes"
Jan 30 14:10:16 crc kubenswrapper[4793]: I0130 14:10:16.526455 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"d49c9d2a1050f5ed243c6b3a7b6b86330cedaed1d8a0565394963de272b03130"}
Jan 30 14:10:19 crc kubenswrapper[4793]: I0130 14:10:19.557964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"efc7d228e44e2727aabe5ea1aba8c086103d815b77e7b65c5e18fc1aa1831899"}
Jan 30 14:10:19 crc kubenswrapper[4793]: I0130 14:10:19.559174 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 14:10:19 crc kubenswrapper[4793]: I0130 14:10:19.596027 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.161094779 podStartE2EDuration="7.596007711s" podCreationTimestamp="2026-01-30 14:10:12 +0000 UTC" firstStartedPulling="2026-01-30 14:10:13.087208749 +0000 UTC m=+1623.788557240" lastFinishedPulling="2026-01-30 14:10:18.522121681 +0000 UTC m=+1629.223470172" observedRunningTime="2026-01-30 14:10:19.589428831 +0000 UTC m=+1630.290777342" watchObservedRunningTime="2026-01-30 14:10:19.596007711 +0000 UTC m=+1630.297356202"
Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.413878 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.414207 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.571426 4793 generic.go:334] "Generic (PLEG): container finished" podID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerID="596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80" exitCode=0
Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.572708 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerDied","Data":"596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80"}
Jan 30 14:10:21 crc kubenswrapper[4793]: I0130 14:10:21.431366 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:10:21 crc kubenswrapper[4793]: I0130 14:10:21.431396 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:10:21 crc kubenswrapper[4793]: I0130 14:10:21.969932 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.088685 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") "
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.088790 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") "
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.088965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") "
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.089013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") "
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.094151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts" (OuterVolumeSpecName: "scripts") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.094243 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph" (OuterVolumeSpecName: "kube-api-access-fsvph") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "kube-api-access-fsvph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.117148 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data" (OuterVolumeSpecName: "config-data") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.122257 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.191970 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.192015 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.192032 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.192068 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.614648 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerDied","Data":"d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c"}
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.614929 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c"
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.615087 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.807810 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.808114 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" containerID="cri-o://fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" gracePeriod=30
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.822513 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.822924 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" containerID="cri-o://9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" gracePeriod=30
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.823586 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" containerID="cri-o://c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" gracePeriod=30
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.838524 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.838802 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" containerID="cri-o://08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" gracePeriod=30
Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.840745 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" containerID="cri-o://cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" gracePeriod=30
Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.624406 4793 generic.go:334] "Generic (PLEG): container finished" podID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" exitCode=143
Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.624549 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerDied","Data":"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"}
Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.626880 4793 generic.go:334] "Generic (PLEG): container finished" podID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" exitCode=143
Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.626924 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerDied","Data":"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"}
Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.909746 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.910969 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.912430 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.912467 4793 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler"
Jan 30 14:10:24 crc kubenswrapper[4793]: I0130 14:10:24.398793 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:10:24 crc kubenswrapper[4793]: E0130 14:10:24.399081 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:10:25 crc kubenswrapper[4793]: I0130 14:10:25.999473 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:52316->10.217.0.194:8775: read: connection reset by peer"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:25.999529 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:52314->10.217.0.194:8775: read: connection reset by peer"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.463952 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.596561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") "
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597109 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") "
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") "
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597295 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") "
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597329 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") "
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.604764 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs" (OuterVolumeSpecName: "logs") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.632364 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9" (OuterVolumeSpecName: "kube-api-access-7kzp9") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "kube-api-access-7kzp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.665714 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data" (OuterVolumeSpecName: "config-data") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.671618 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679403 4793 generic.go:334] "Generic (PLEG): container finished" podID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" exitCode=0
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerDied","Data":"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"}
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerDied","Data":"8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721"}
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679496 4793 scope.go:117] "RemoveContainer" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679502 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.687219 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699892 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699918 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699929 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699937 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699946 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.739571 4793 scope.go:117] "RemoveContainer" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.762123 4793 scope.go:117] "RemoveContainer" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"
Jan 30 14:10:26 crc kubenswrapper[4793]: E0130 14:10:26.765491 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f\": container with ID starting with cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f not found: ID does not exist" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.765555 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"} err="failed to get container status \"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f\": rpc error: code = NotFound desc = could not find container \"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f\": container with ID starting with cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f not found: ID does not exist"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.765589 4793 scope.go:117] "RemoveContainer" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"
Jan 30 14:10:26 crc kubenswrapper[4793]: E0130 14:10:26.766168 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04\": container with ID starting with 08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04 not found: ID does not exist" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"
Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.766223 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"} err="failed to get container status \"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04\": rpc error: code = NotFound desc = could not find container \"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04\": container with ID starting with 08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04 not found: ID does not exist"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.024208 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.040102 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056338 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056844 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerName="nova-manage"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056866 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerName="nova-manage"
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056885 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056893 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata"
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056919 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056928 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns"
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056944 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056951 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log"
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056966 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="init"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056973 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="init"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057203 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerName="nova-manage"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057237 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057247 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057262 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.058510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.065498 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.065776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.069770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.111813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02223b96-2b8b-4d32-b7ba-9cb517e03f13-logs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.111964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.112077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjl2\" (UniqueName: \"kubernetes.io/projected/02223b96-2b8b-4d32-b7ba-9cb517e03f13-kube-api-access-ptjl2\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.112103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.112159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-config-data\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.213841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-config-data\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.214589 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02223b96-2b8b-4d32-b7ba-9cb517e03f13-logs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.214815 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.214940 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02223b96-2b8b-4d32-b7ba-9cb517e03f13-logs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.215028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.215140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptjl2\" (UniqueName: \"kubernetes.io/projected/02223b96-2b8b-4d32-b7ba-9cb517e03f13-kube-api-access-ptjl2\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.220371 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.225896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-config-data\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.226902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.231434 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptjl2\" (UniqueName: \"kubernetes.io/projected/02223b96-2b8b-4d32-b7ba-9cb517e03f13-kube-api-access-ptjl2\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.428869 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.629162 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689475 4793 generic.go:334] "Generic (PLEG): container finished" podID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" exitCode=0
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689534 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerDied","Data":"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"}
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerDied","Data":"e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86"}
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689579 4793 scope.go:117] "RemoveContainer" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689676 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.719143 4793 scope.go:117] "RemoveContainer" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") "
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") "
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728644 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") "
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") "
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") "
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") "
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.729650 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs" (OuterVolumeSpecName: "logs") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.739965 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf" (OuterVolumeSpecName: "kube-api-access-mlgzf") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "kube-api-access-mlgzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.754241 4793 scope.go:117] "RemoveContainer" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.754983 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a\": container with ID starting with c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a not found: ID does not exist" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.755034 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"} err="failed to get container status \"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a\": rpc error: code = NotFound desc = could not find container \"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a\": container with ID starting with c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a not found: ID does not exist"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.755068 4793 scope.go:117] "RemoveContainer" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"
Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.755535 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9\": container with ID starting with 9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9 not found: ID does not exist" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.755559 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"} err="failed to get container status \"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9\": rpc error: code = NotFound desc = could not find container \"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9\": container with ID starting with 9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9 not found: ID does not exist"
Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.758027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data"
(OuterVolumeSpecName: "config-data") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.760179 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.792024 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.797937 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830776 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830817 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830843 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830857 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830869 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830877 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.937964 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:27 crc kubenswrapper[4793]: W0130 14:10:27.943982 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02223b96_2b8b_4d32_b7ba_9cb517e03f13.slice/crio-47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660 WatchSource:0}: Error finding container 47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660: Status 404 returned error can't find the container with id 47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660 Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.082969 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.097066 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.117218 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.117779 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.117804 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.117848 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.117857 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.118123 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.118161 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.119480 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.124150 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.124235 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.124370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.126567 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.135552 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-config-data\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.135792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-logs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.135911 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.136011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ddc\" (UniqueName: \"kubernetes.io/projected/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-kube-api-access-w9ddc\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.136087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.136207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.239895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.239946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-config-data\") pod \"nova-api-0\" (UID: 
\"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.239983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-logs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240043 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240080 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ddc\" (UniqueName: \"kubernetes.io/projected/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-kube-api-access-w9ddc\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240098 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240974 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-logs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.248378 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.248419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.250666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-config-data\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.261100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ddc\" (UniqueName: \"kubernetes.io/projected/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-kube-api-access-w9ddc\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.261520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 
30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.411260 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" path="/var/lib/kubelet/pods/49ed6c75-bf0d-4f2f-a470-42fd54e304da/volumes" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.413145 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" path="/var/lib/kubelet/pods/61f197d5-ac2e-4907-aaaf-78ac1156368c/volumes" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.434675 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.478342 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.545724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"b0772278-2936-43a7-b8e8-255d72a26a46\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.545778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"b0772278-2936-43a7-b8e8-255d72a26a46\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.546451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"b0772278-2936-43a7-b8e8-255d72a26a46\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.571346 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x" (OuterVolumeSpecName: "kube-api-access-r7x6x") pod "b0772278-2936-43a7-b8e8-255d72a26a46" (UID: "b0772278-2936-43a7-b8e8-255d72a26a46"). InnerVolumeSpecName "kube-api-access-r7x6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.588174 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data" (OuterVolumeSpecName: "config-data") pod "b0772278-2936-43a7-b8e8-255d72a26a46" (UID: "b0772278-2936-43a7-b8e8-255d72a26a46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.610428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0772278-2936-43a7-b8e8-255d72a26a46" (UID: "b0772278-2936-43a7-b8e8-255d72a26a46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.649806 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.649848 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.649865 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.712210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02223b96-2b8b-4d32-b7ba-9cb517e03f13","Type":"ContainerStarted","Data":"b5332e1f855d542a3aec1e3972120fafd4540f19940a9b97a1d6286167ac2d00"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.712252 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02223b96-2b8b-4d32-b7ba-9cb517e03f13","Type":"ContainerStarted","Data":"ff9fb94535fef65e311e19c7b9311a348c9264d1affd60b0bc5d3319b07a49e9"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.712261 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02223b96-2b8b-4d32-b7ba-9cb517e03f13","Type":"ContainerStarted","Data":"47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717770 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0772278-2936-43a7-b8e8-255d72a26a46" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" exitCode=0 Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerDied","Data":"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717878 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerDied","Data":"0c43fd7a19c8e62a860f534d7237c66cb3f8e183b6b7d0b236a6b8cd04692810"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717898 4793 scope.go:117] "RemoveContainer" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.718038 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.748574 4793 scope.go:117] "RemoveContainer" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.750261 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a\": container with ID starting with fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a not found: ID does not exist" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.750292 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a"} err="failed to get container status \"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a\": rpc error: code = NotFound desc = could not find container \"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a\": container with ID starting with fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a not found: ID does not exist" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.750606 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.750580722 podStartE2EDuration="1.750580722s" podCreationTimestamp="2026-01-30 14:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:28.743736646 +0000 UTC m=+1639.445085137" watchObservedRunningTime="2026-01-30 14:10:28.750580722 +0000 UTC m=+1639.451929233" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.770640 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.790503 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.804641 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.805353 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.805366 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.805554 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.806450 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.813033 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.834518 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.852574 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvc2\" (UniqueName: \"kubernetes.io/projected/9e04e820-112a-4afa-b908-f9b8be3e9e7c-kube-api-access-9mvc2\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.852659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-config-data\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.852726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.954059 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvc2\" (UniqueName: \"kubernetes.io/projected/9e04e820-112a-4afa-b908-f9b8be3e9e7c-kube-api-access-9mvc2\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.954152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-config-data\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.954221 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.959565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-config-data\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.959600 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.972241 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvc2\" (UniqueName: 
\"kubernetes.io/projected/9e04e820-112a-4afa-b908-f9b8be3e9e7c-kube-api-access-9mvc2\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: W0130 14:10:28.989776 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b4991f7_e6e6_4dfd_a75b_25a7506591e1.slice/crio-d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e WatchSource:0}: Error finding container d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e: Status 404 returned error can't find the container with id d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.990157 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.134618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.602416 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:29 crc kubenswrapper[4793]: W0130 14:10:29.606611 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e04e820_112a_4afa_b908_f9b8be3e9e7c.slice/crio-b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc WatchSource:0}: Error finding container b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc: Status 404 returned error can't find the container with id b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.733716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e04e820-112a-4afa-b908-f9b8be3e9e7c","Type":"ContainerStarted","Data":"b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.735569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b4991f7-e6e6-4dfd-a75b-25a7506591e1","Type":"ContainerStarted","Data":"87246b291ffab77db78cc65ecd8c0fd944c2bd447077a37a61c96e2ab8c54184"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.735600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b4991f7-e6e6-4dfd-a75b-25a7506591e1","Type":"ContainerStarted","Data":"89cb391b4339b9ea2b2f0ba87faab6ade18019ef0fd9cfb5a91677f13cadc744"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.735615 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b4991f7-e6e6-4dfd-a75b-25a7506591e1","Type":"ContainerStarted","Data":"d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.753077 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.752264762 podStartE2EDuration="1.752264762s" podCreationTimestamp="2026-01-30 14:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:29.751126944 +0000 UTC m=+1640.452475465" watchObservedRunningTime="2026-01-30 14:10:29.752264762 +0000 UTC m=+1640.453613253" Jan 30 14:10:30 crc kubenswrapper[4793]: I0130 
14:10:30.410021 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" path="/var/lib/kubelet/pods/b0772278-2936-43a7-b8e8-255d72a26a46/volumes" Jan 30 14:10:30 crc kubenswrapper[4793]: I0130 14:10:30.745815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e04e820-112a-4afa-b908-f9b8be3e9e7c","Type":"ContainerStarted","Data":"5fa98a9f2da8132b5f12402c1cbcf5b1d9acbf355abda26521806509c5c1864c"} Jan 30 14:10:30 crc kubenswrapper[4793]: I0130 14:10:30.768650 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.768625208 podStartE2EDuration="2.768625208s" podCreationTimestamp="2026-01-30 14:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:30.760583083 +0000 UTC m=+1641.461931584" watchObservedRunningTime="2026-01-30 14:10:30.768625208 +0000 UTC m=+1641.469973699" Jan 30 14:10:32 crc kubenswrapper[4793]: I0130 14:10:32.429815 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 14:10:32 crc kubenswrapper[4793]: I0130 14:10:32.429912 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 14:10:34 crc kubenswrapper[4793]: I0130 14:10:34.135607 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 14:10:36 crc kubenswrapper[4793]: I0130 14:10:36.398657 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:10:36 crc kubenswrapper[4793]: E0130 14:10:36.399607 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.308241 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.310596 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.358659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.358898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.359137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.413205 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.429178 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.429232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.460354 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.460494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.460530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.461314 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.461425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.481109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.631423 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.126034 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.442223 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="02223b96-2b8b-4d32-b7ba-9cb517e03f13" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.442300 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="02223b96-2b8b-4d32-b7ba-9cb517e03f13" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.480462 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.480507 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.826856 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" exitCode=0 Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.826907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b"} Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.826956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerStarted","Data":"4354863d5270a2dd978e9ec14ef4a0fa31ed07055c5a9a9b9bc5612d7fef101e"} Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.136073 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.177875 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.494303 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b4991f7-e6e6-4dfd-a75b-25a7506591e1" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.494993 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b4991f7-e6e6-4dfd-a75b-25a7506591e1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.837131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerStarted","Data":"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132"} Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.899554 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 14:10:42 crc kubenswrapper[4793]: I0130 14:10:42.619143 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 14:10:42 crc kubenswrapper[4793]: I0130 14:10:42.876460 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" exitCode=0 Jan 30 14:10:42 crc kubenswrapper[4793]: I0130 14:10:42.876541 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132"} Jan 30 14:10:43 crc kubenswrapper[4793]: I0130 14:10:43.888684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerStarted","Data":"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099"} Jan 30 14:10:43 crc kubenswrapper[4793]: I0130 14:10:43.920267 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwn45" podStartSLOduration=2.347998179 podStartE2EDuration="6.920243017s" podCreationTimestamp="2026-01-30 14:10:37 +0000 UTC" firstStartedPulling="2026-01-30 14:10:38.828843025 +0000 UTC m=+1649.530191516" lastFinishedPulling="2026-01-30 14:10:43.401087843 +0000 UTC m=+1654.102436354" observedRunningTime="2026-01-30 14:10:43.914414596 +0000 UTC m=+1654.615763167" watchObservedRunningTime="2026-01-30 14:10:43.920243017 +0000 UTC m=+1654.621591518" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.434702 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.435691 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.440480 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.632807 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.633033 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.955236 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.494662 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.495504 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.495624 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.502388 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.672990 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cwn45" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" probeResult="failure" output=< Jan 30 14:10:48 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:10:48 crc kubenswrapper[4793]: > Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.958882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.967909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 14:10:49 crc kubenswrapper[4793]: I0130 14:10:49.398856 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:10:49 crc kubenswrapper[4793]: E0130 14:10:49.399186 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:10:56 crc kubenswrapper[4793]: I0130 14:10:56.749822 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:10:57 crc kubenswrapper[4793]: I0130 14:10:57.751142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:57 crc kubenswrapper[4793]: I0130 14:10:57.844167 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:58 crc kubenswrapper[4793]: I0130 14:10:58.056930 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:58 crc kubenswrapper[4793]: I0130 14:10:58.510336 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.046651 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwn45" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" containerID="cri-o://482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" gracePeriod=2 Jan 30 14:10:59 
crc kubenswrapper[4793]: I0130 14:10:59.774128 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.915929 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.915995 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.916028 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.917308 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities" (OuterVolumeSpecName: "utilities") pod "ea9c91d0-f921-4b9e-a37b-9d50419d506e" (UID: "ea9c91d0-f921-4b9e-a37b-9d50419d506e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.938933 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7" (OuterVolumeSpecName: "kube-api-access-rpjd7") pod "ea9c91d0-f921-4b9e-a37b-9d50419d506e" (UID: "ea9c91d0-f921-4b9e-a37b-9d50419d506e"). InnerVolumeSpecName "kube-api-access-rpjd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.998164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea9c91d0-f921-4b9e-a37b-9d50419d506e" (UID: "ea9c91d0-f921-4b9e-a37b-9d50419d506e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.018146 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.018190 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.018202 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058073 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" exitCode=0 Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058119 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099"} Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"4354863d5270a2dd978e9ec14ef4a0fa31ed07055c5a9a9b9bc5612d7fef101e"} Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058159 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058178 4793 scope.go:117] "RemoveContainer" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.083827 4793 scope.go:117] "RemoveContainer" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.096785 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.110204 4793 scope.go:117] "RemoveContainer" containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.148465 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.183507 4793 scope.go:117] "RemoveContainer" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" Jan 30 14:11:00 crc kubenswrapper[4793]: E0130 14:11:00.183891 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099\": container with ID starting with 482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099 not found: ID does not exist" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.183919 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099"} err="failed to get container status \"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099\": rpc error: code = NotFound desc = could not find container \"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099\": container with ID starting with 482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099 not found: ID does not exist" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.183947 4793 scope.go:117] "RemoveContainer" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" Jan 30 14:11:00 crc kubenswrapper[4793]: E0130 14:11:00.184212 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132\": container with ID starting with 7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132 not found: ID does not exist" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.184245 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132"} err="failed to get container status \"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132\": rpc error: code = NotFound desc = could not find container \"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132\": container with ID starting with 7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132 not found: ID does not exist" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.184259 4793 scope.go:117] "RemoveContainer" 
containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" Jan 30 14:11:00 crc kubenswrapper[4793]: E0130 14:11:00.184457 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b\": container with ID starting with 6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b not found: ID does not exist" containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.184477 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b"} err="failed to get container status \"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b\": rpc error: code = NotFound desc = could not find container \"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b\": container with ID starting with 6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b not found: ID does not exist" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.409511 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" path="/var/lib/kubelet/pods/ea9c91d0-f921-4b9e-a37b-9d50419d506e/volumes" Jan 30 14:11:01 crc kubenswrapper[4793]: I0130 14:11:01.940105 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" containerID="cri-o://ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" gracePeriod=604795 Jan 30 14:11:03 crc kubenswrapper[4793]: I0130 14:11:03.398551 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:03 crc kubenswrapper[4793]: E0130 14:11:03.398869 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:03 crc kubenswrapper[4793]: I0130 14:11:03.455256 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" containerID="cri-o://b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265" gracePeriod=604796 Jan 30 14:11:06 crc kubenswrapper[4793]: I0130 14:11:06.078451 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 30 14:11:06 crc kubenswrapper[4793]: I0130 14:11:06.216206 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.541677 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.697768 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.697842 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.697987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698169 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698305 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698333 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698383 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698450 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698493 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698544 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: 
\"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.699145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.700455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.705675 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.706136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info" (OuterVolumeSpecName: "pod-info") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.706159 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.706422 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.711116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.722333 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w" (OuterVolumeSpecName: "kube-api-access-rck4w") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "kube-api-access-rck4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.730278 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.787557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data" (OuterVolumeSpecName: "config-data") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803745 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803793 4793 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803805 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803821 4793 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803831 4793 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803842 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803879 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803892 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803904 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.828367 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.828666 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.893178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.905885 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.905922 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.905935 4793 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159013 4793 generic.go:334] "Generic (PLEG): container finished" podID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" exitCode=0 Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerDied","Data":"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa"} Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerDied","Data":"0efe8f891a233c8e5ac4fe6bb1b425a66ddbc8f34f8412134d77a42240eb7c39"} Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159428 4793 scope.go:117] "RemoveContainer" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159581 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.230847 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.242861 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.248003 4793 scope.go:117] "RemoveContainer" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.293388 4793 scope.go:117] "RemoveContainer" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.299796 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa\": container with ID starting with ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa not found: ID does not exist" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.299850 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa"} err="failed to get container status \"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa\": rpc error: code = NotFound desc = could not find container \"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa\": container with ID starting with ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa not found: ID does not exist" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.299884 4793 scope.go:117] "RemoveContainer" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.311596 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312222 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-utilities" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312299 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-utilities" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312400 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="setup-container" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312482 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="setup-container" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312556 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312685 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-content" Jan 30 14:11:09 crc kubenswrapper[4793]: 
I0130 14:11:09.312741 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-content" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312808 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312865 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.313124 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.311662 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48\": container with ID starting with 06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48 not found: ID does not exist" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.313248 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48"} err="failed to get container status \"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48\": rpc error: code = NotFound desc = could not find container \"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48\": container with ID starting with 06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48 not found: ID does not exist" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.313216 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.314466 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.320687 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4mm4r" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324497 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324540 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324597 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324634 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324756 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.325035 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.348030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414000 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ffc0461-9589-45f5-a656-85cc01de58ed-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414075 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414141 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/7ffc0461-9589-45f5-a656-85cc01de58ed-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414266 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414291 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzqg\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-kube-api-access-vqzqg\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516189 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ffc0461-9589-45f5-a656-85cc01de58ed-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516259 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " 
pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ffc0461-9589-45f5-a656-85cc01de58ed-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516469 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqzqg\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-kube-api-access-vqzqg\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517446 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517882 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517932 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.518474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.525801 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ffc0461-9589-45f5-a656-85cc01de58ed-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.528718 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.533557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqzqg\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-kube-api-access-vqzqg\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.534255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ffc0461-9589-45f5-a656-85cc01de58ed-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.540039 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.602304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.636842 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.956822 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.958577 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.983005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.018905 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035469 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035578 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035610 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035705 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035732 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035757 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035862 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137784 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137877 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.138026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.139017 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.139524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc 
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.140009 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.140539 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.140849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.141030 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.167013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.196732 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerID="b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265" exitCode=0
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.196884 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerDied","Data":"b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265"}
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.290078 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.304762 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.439357 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" path="/var/lib/kubelet/pods/0ab4371b-53c0-41a1-9561-0c02f936c7a7/volumes"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.448635 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455496 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455699 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455735 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455775 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455805 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455909 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455953 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.465665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.477368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.493632 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5" (OuterVolumeSpecName: "kube-api-access-f59v5") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "kube-api-access-f59v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.495600 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.498676 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.499769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.505699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.508753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info" (OuterVolumeSpecName: "pod-info") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559003 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559036 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559048 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559084 4793 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559112 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559121 4793 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559130 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559145 4793 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.632300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data" (OuterVolumeSpecName: "config-data") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.656164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf" (OuterVolumeSpecName: "server-conf") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.660731 4793 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.660900 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.695021 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.746645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.763487 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.763528 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.030015 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.221434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerDied","Data":"49420acdae0565905cd8f73dba3384bd4f0c8ed41985335ead11f16b3b125159"} Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.221482 4793 scope.go:117] "RemoveContainer" containerID="b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.221636 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.226590 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerStarted","Data":"b126c034f300df436262ee7b232720f4860c063847d40c54826a736a9bb22ffb"} Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.227617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerStarted","Data":"ad26c96752807da90d4235406116a1597523e7ece85d333a17d15f0f529f2705"} Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.273172 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.276691 4793 scope.go:117] "RemoveContainer" containerID="d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.283320 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.298835 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: E0130 14:11:11.299290 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.299308 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" Jan 30 14:11:11 crc kubenswrapper[4793]: E0130 14:11:11.299336 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="setup-container" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.299343 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="setup-container" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.299516 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.300456 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.312528 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.317877 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318242 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dkqxx" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318397 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318498 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318593 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318526 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374436 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374514 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkd5\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-kube-api-access-jwkd5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374563 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374621 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374646 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b0247ba-adfd-4195-bf23-91478001fed7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b0247ba-adfd-4195-bf23-91478001fed7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374713 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-jwkd5\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-kube-api-access-jwkd5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b0247ba-adfd-4195-bf23-91478001fed7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b0247ba-adfd-4195-bf23-91478001fed7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479094 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479155 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479321 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-plugins\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479715 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.480161 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.480905 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.486138 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.486808 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b0247ba-adfd-4195-bf23-91478001fed7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.487619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.488551 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.501625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b0247ba-adfd-4195-bf23-91478001fed7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.531962 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkd5\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-kube-api-access-jwkd5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.642400 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.719923 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.240951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerStarted","Data":"b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242"} Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.244476 4793 generic.go:334] "Generic (PLEG): container finished" podID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerID="942b9a91649d88b76815dae7ef5ceda6f5ba7882083b88d098feb75a679ceddd" exitCode=0 Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.244516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerDied","Data":"942b9a91649d88b76815dae7ef5ceda6f5ba7882083b88d098feb75a679ceddd"} Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.423744 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" path="/var/lib/kubelet/pods/5a4cd276-23a5-4acb-bb1b-41470a11c945/volumes" Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.435857 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.255073 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerStarted","Data":"c7c8132d652f1c852c160648dbfd496d7ed534aa237703b5ad385eb046c3abbd"} Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.258855 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerStarted","Data":"2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e"} Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.258909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.285987 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" podStartSLOduration=4.28596617 podStartE2EDuration="4.28596617s" podCreationTimestamp="2026-01-30 14:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:11:13.274108117 +0000 UTC m=+1683.975456618" watchObservedRunningTime="2026-01-30 14:11:13.28596617 +0000 UTC m=+1683.987314671" Jan 30 14:11:14 crc kubenswrapper[4793]: I0130 14:11:14.272191 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerStarted","Data":"8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8"} Jan 30 14:11:14 crc kubenswrapper[4793]: I0130 14:11:14.398368 4793 scope.go:117] "RemoveContainer" 
containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:14 crc kubenswrapper[4793]: E0130 14:11:14.398642 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.307466 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.387968 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"] Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.388272 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns" containerID="cri-o://1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" gracePeriod=10 Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.623670 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-5bm62"] Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.626377 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.671350 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-5bm62"] Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-config\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772288 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swswb\" (UniqueName: \"kubernetes.io/projected/b3e8eb28-c303-409b-a89b-b273b2f56fff-kube-api-access-swswb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772834 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.773017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875017 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-config\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swswb\" (UniqueName: \"kubernetes.io/projected/b3e8eb28-c303-409b-a89b-b273b2f56fff-kube-api-access-swswb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875855 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875914 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.876201 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-config\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.876800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.876875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.877593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.877795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.878110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.898263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swswb\" (UniqueName: \"kubernetes.io/projected/b3e8eb28-c303-409b-a89b-b273b2f56fff-kube-api-access-swswb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.956013 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.109838 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181872 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.182033 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.182087 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.187755 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh" (OuterVolumeSpecName: "kube-api-access-9wjfh") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "kube-api-access-9wjfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.283915 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.338144 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-5bm62"] Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342118 4793 generic.go:334] "Generic (PLEG): container finished" podID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" exitCode=0 Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerDied","Data":"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4"} Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerDied","Data":"78fb92af330aba5ae85ee09e8c30d31dd6612ee663286c5bea03ea04be9abef3"} Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342374 4793 scope.go:117] "RemoveContainer" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.343498 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.361577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.372510 4793 scope.go:117] "RemoveContainer" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.373197 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.377663 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.386026 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.386103 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.386118 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.399477 4793 scope.go:117] "RemoveContainer" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" Jan 30 14:11:21 crc kubenswrapper[4793]: E0130 14:11:21.400584 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4\": container with ID starting with 1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4 not found: ID does not exist" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.400620 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4"} err="failed to get container status \"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4\": rpc error: code = NotFound desc = could not find container \"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4\": container with ID starting with 1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4 not found: ID does not exist" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.400643 4793 scope.go:117] "RemoveContainer" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" Jan 30 14:11:21 crc kubenswrapper[4793]: E0130 14:11:21.401765 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889\": container with ID starting with 0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889 not found: ID does not exist" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.401808 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889"} err="failed to get container status \"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889\": rpc error: code = NotFound desc = could not find container \"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889\": container with ID starting with 0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889 not found: ID does not exist" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.405110 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config" (OuterVolumeSpecName: 
"config") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.408508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.487580 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.488015 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.703966 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"] Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.715110 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"] Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.354472 4793 generic.go:334] "Generic (PLEG): container finished" podID="b3e8eb28-c303-409b-a89b-b273b2f56fff" containerID="edaded44b57086b3e7c84221f1f47f36c4cc2427d1e444f44e5430172c9e82d2" exitCode=0 Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.354527 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" event={"ID":"b3e8eb28-c303-409b-a89b-b273b2f56fff","Type":"ContainerDied","Data":"edaded44b57086b3e7c84221f1f47f36c4cc2427d1e444f44e5430172c9e82d2"} Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.354563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" event={"ID":"b3e8eb28-c303-409b-a89b-b273b2f56fff","Type":"ContainerStarted","Data":"8dee820bbac36fa286cdb5cc61dcdf27fa6218771c3044009cf48d9ef23c5b9b"} Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.414966 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" path="/var/lib/kubelet/pods/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1/volumes" Jan 30 14:11:23 crc kubenswrapper[4793]: I0130 14:11:23.364396 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" event={"ID":"b3e8eb28-c303-409b-a89b-b273b2f56fff","Type":"ContainerStarted","Data":"73d9105cd08f1683fc3700f4a2cacf52c2e7d1cdf04ec141f1fe5704fbdea46a"} Jan 30 14:11:23 crc kubenswrapper[4793]: I0130 14:11:23.364734 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:23 crc kubenswrapper[4793]: I0130 14:11:23.390254 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" podStartSLOduration=3.390231331 podStartE2EDuration="3.390231331s" podCreationTimestamp="2026-01-30 14:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 14:11:23.381668659 +0000 UTC m=+1694.083017170" watchObservedRunningTime="2026-01-30 14:11:23.390231331 +0000 UTC m=+1694.091579822" Jan 30 14:11:29 crc kubenswrapper[4793]: I0130 14:11:29.399674 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:29 crc kubenswrapper[4793]: E0130 14:11:29.400566 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:30 crc kubenswrapper[4793]: I0130 14:11:30.959287 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.030888 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.031400 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns" containerID="cri-o://2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e" gracePeriod=10 Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.441113 4793 generic.go:334] "Generic (PLEG): container finished" podID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerID="2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e" exitCode=0 Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.441185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerDied","Data":"2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e"} Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.710807 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.800411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.800664 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.800761 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801191 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.825630 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m" (OuterVolumeSpecName: "kube-api-access-jqz2m") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "kube-api-access-jqz2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.850385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.850455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.853545 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.854884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.863959 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config" (OuterVolumeSpecName: "config") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.876834 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.903965 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904001 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904015 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904027 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904037 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904077 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904086 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.451441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerDied","Data":"ad26c96752807da90d4235406116a1597523e7ece85d333a17d15f0f529f2705"} Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.451767 4793 scope.go:117] "RemoveContainer" containerID="2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.451493 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.483436 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.489695 4793 scope.go:117] "RemoveContainer" containerID="942b9a91649d88b76815dae7ef5ceda6f5ba7882083b88d098feb75a679ceddd" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.493992 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:34 crc kubenswrapper[4793]: I0130 14:11:34.411745 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" path="/var/lib/kubelet/pods/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b/volumes" Jan 30 14:11:40 crc kubenswrapper[4793]: I0130 14:11:40.408901 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:40 crc kubenswrapper[4793]: E0130 14:11:40.409721 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:44 crc kubenswrapper[4793]: I0130 14:11:44.930181 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-vsdkv" podUID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:11:45 crc kubenswrapper[4793]: I0130 14:11:45.894801 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-g9hvr" podUID="519ea47c-0d76-44cb-af34-823c71e508c9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.659232 4793 generic.go:334] "Generic (PLEG): container finished" podID="7ffc0461-9589-45f5-a656-85cc01de58ed" containerID="b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242" exitCode=0 Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.659334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerDied","Data":"b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242"} Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.662092 4793 generic.go:334] "Generic (PLEG): container finished" podID="3b0247ba-adfd-4195-bf23-91478001fed7" containerID="8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8" exitCode=0 Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.662125 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerDied","Data":"8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8"} Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.818185 4793 scope.go:117] "RemoveContainer" containerID="915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065" Jan 30 
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.898664 4793 scope.go:117] "RemoveContainer" containerID="0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8"
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.953338 4793 scope.go:117] "RemoveContainer" containerID="d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382"
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.674630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerStarted","Data":"4ad631a244ea3a62ebbde0b0673b298753063f8dfc7ec291e85b02e61c0cf71b"}
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.677130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerStarted","Data":"c0d7bf6ddb176fb2e5c090a7298d794e3f968020a1664efaef051a3ba34d4fe8"}
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.678275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.716645 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.716624633 podStartE2EDuration="38.716624633s" podCreationTimestamp="2026-01-30 14:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:11:47.711756632 +0000 UTC m=+1718.413105133" watchObservedRunningTime="2026-01-30 14:11:47.716624633 +0000 UTC m=+1718.417973124"
Jan 30 14:11:48 crc kubenswrapper[4793]: I0130 14:11:48.684114 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:11:48 crc kubenswrapper[4793]: I0130 14:11:48.711512 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.711493226 podStartE2EDuration="37.711493226s" podCreationTimestamp="2026-01-30 14:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:11:48.702401782 +0000 UTC m=+1719.403750273" watchObservedRunningTime="2026-01-30 14:11:48.711493226 +0000 UTC m=+1719.412841717"
Jan 30 14:11:52 crc kubenswrapper[4793]: I0130 14:11:52.398856 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:11:52 crc kubenswrapper[4793]: E0130 14:11:52.399256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.174020 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8"]
Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.177426 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.177675 
4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns" Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.177973 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="init" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178300 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="init" Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.178373 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="init" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178429 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="init" Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.178501 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178572 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178882 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178991 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.179795 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.183121 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.183437 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.183844 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.185625 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.186625 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8"] Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.267380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.267439 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.267700 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.267820 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.369896 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.369962 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.370023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.370085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.375778 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.375977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.377457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.399456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.530256 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.640620 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7ffc0461-9589-45f5-a656-85cc01de58ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.207:5671: connect: connection refused" Jan 30 14:12:00 crc kubenswrapper[4793]: W0130 14:12:00.894290 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03127c65_edbf_41bd_9543_35ae0eddbff6.slice/crio-75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3 WatchSource:0}: Error finding container 75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3: Status 404 returned error can't find the container with id 75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3 Jan 30 14:12:00 crc kubenswrapper[4793]: I0130 14:12:00.905001 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8"] Jan 30 14:12:01 crc kubenswrapper[4793]: I0130 14:12:01.724462 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3b0247ba-adfd-4195-bf23-91478001fed7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.209:5671: connect: connection refused" Jan 30 14:12:01 crc kubenswrapper[4793]: I0130 14:12:01.795413 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerStarted","Data":"75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3"} Jan 30 14:12:03 crc kubenswrapper[4793]: I0130 14:12:03.398661 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:12:03 crc kubenswrapper[4793]: E0130 14:12:03.399017 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.092321 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.092490 4793 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 14:12:16 crc kubenswrapper[4793]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Jan 30 14:12:16 crc kubenswrapper[4793]: - hosts: all
Jan 30 14:12:16 crc kubenswrapper[4793]: strategy: linear
Jan 30 14:12:16 crc kubenswrapper[4793]: tasks:
Jan 30 14:12:16 crc kubenswrapper[4793]: - name: Enable podified-repos
Jan 30 14:12:16 crc kubenswrapper[4793]: become: true
Jan 30 14:12:16 crc kubenswrapper[4793]: ansible.builtin.shell: |
Jan 30 14:12:16 crc kubenswrapper[4793]: set -euxo pipefail
Jan 30 14:12:16 crc kubenswrapper[4793]: pushd /var/tmp
Jan 30 14:12:16 crc kubenswrapper[4793]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 30 14:12:16 crc kubenswrapper[4793]: pushd repo-setup-main
Jan 30 14:12:16 crc kubenswrapper[4793]: python3 -m venv ./venv
Jan 30 14:12:16 crc kubenswrapper[4793]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 30 14:12:16 crc kubenswrapper[4793]: ./venv/bin/repo-setup current-podified -b antelope
Jan 30 14:12:16 crc kubenswrapper[4793]: popd
Jan 30 14:12:16 crc kubenswrapper[4793]: rm -rf repo-setup-main
Jan 30 14:12:16 crc kubenswrapper[4793]: 
Jan 30 14:12:16 crc kubenswrapper[4793]: 
Jan 30 14:12:16 crc kubenswrapper[4793]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 30 14:12:16 crc kubenswrapper[4793]: edpm_override_hosts: openstack-edpm-ipam
Jan 30 14:12:16 crc kubenswrapper[4793]: edpm_service_type: repo-setup
Jan 30 14:12:16 crc kubenswrapper[4793]: 
Jan 30 14:12:16 crc kubenswrapper[4793]: 
Jan 30 14:12:16 crc kubenswrapper[4793]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq2gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8_openstack(03127c65-edbf-41bd-9543-35ae0eddbff6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 30 14:12:16 crc kubenswrapper[4793]: > logger="UnhandledError"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.093600 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.953546 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6"
Jan 30 14:12:25 crc kubenswrapper[4793]: I0130 14:12:25.398967 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:12:25 crc kubenswrapper[4793]: E0130 14:12:25.400111 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
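[editor's note] For readability, here is the RUNNER_PLAYBOOK value from the Container spec dumped above, reassembled without the per-line log prefixes. The task content is verbatim from the log; only the YAML indentation is reconstructed, since the log flattens it. The RUNNER_EXTRA_VARS value is appended the same way.

    - hosts: all
      strategy: linear
      tasks:
        - name: Enable podified-repos
          become: true
          ansible.builtin.shell: |
            set -euxo pipefail
            pushd /var/tmp
            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
            pushd repo-setup-main
            python3 -m venv ./venv
            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
            ./venv/bin/repo-setup current-podified -b antelope
            popd
            rm -rf repo-setup-main

    # RUNNER_EXTRA_VARS (verbatim):
    edpm_override_hosts: openstack-edpm-ipam
    edpm_service_type: repo-setup

The pull itself failed here with ErrImagePull (context canceled) and fell into ImagePullBackOff; the retried pull completes at 14:12:32 below and the container starts.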
Jan 30 14:12:32 crc kubenswrapper[4793]: I0130 14:12:32.212759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:12:33 crc kubenswrapper[4793]: I0130 14:12:33.115033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerStarted","Data":"7b11af670b73401f4802a9bea647881a00e8ba16559b8a2c4149777c928f19f1"}
Jan 30 14:12:33 crc kubenswrapper[4793]: I0130 14:12:33.142702 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" podStartSLOduration=2.829010543 podStartE2EDuration="34.142685316s" podCreationTimestamp="2026-01-30 14:11:59 +0000 UTC" firstStartedPulling="2026-01-30 14:12:00.896807838 +0000 UTC m=+1731.598156329" lastFinishedPulling="2026-01-30 14:12:32.210482611 +0000 UTC m=+1762.911831102" observedRunningTime="2026-01-30 14:12:33.136612297 +0000 UTC m=+1763.837960788" watchObservedRunningTime="2026-01-30 14:12:33.142685316 +0000 UTC m=+1763.844033797"
Jan 30 14:12:37 crc kubenswrapper[4793]: I0130 14:12:37.398596 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:12:37 crc kubenswrapper[4793]: E0130 14:12:37.399437 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:12:45 crc kubenswrapper[4793]: I0130 14:12:45.249658 4793 generic.go:334] "Generic (PLEG): container finished" podID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerID="7b11af670b73401f4802a9bea647881a00e8ba16559b8a2c4149777c928f19f1" exitCode=0
Jan 30 14:12:45 crc kubenswrapper[4793]: I0130 14:12:45.249755 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerDied","Data":"7b11af670b73401f4802a9bea647881a00e8ba16559b8a2c4149777c928f19f1"}
Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.684104 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802316 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802711 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.811564 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj" (OuterVolumeSpecName: "kube-api-access-dq2gj") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "kube-api-access-dq2gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.811819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.832605 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.837247 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory" (OuterVolumeSpecName: "inventory") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904534 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904569 4793 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904580 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904589 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.270070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerDied","Data":"75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3"} Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.270113 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.270173 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.408517 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5"] Jan 30 14:12:47 crc kubenswrapper[4793]: E0130 14:12:47.409192 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.409293 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.409565 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.410296 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.412439 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.413253 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.413417 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.414239 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.420157 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5"] Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.520774 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.520900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.521150 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.622770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.622871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.622926 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.626457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.629601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.647100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.732784 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.758518 4793 scope.go:117] "RemoveContainer" containerID="c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438" Jan 30 14:12:48 crc kubenswrapper[4793]: W0130 14:12:48.336351 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb89c70f6_dabd_4984_8f21_235a9ab2f307.slice/crio-2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913 WatchSource:0}: Error finding container 2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913: Status 404 returned error can't find the container with id 2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913 Jan 30 14:12:48 crc kubenswrapper[4793]: I0130 14:12:48.336874 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5"] Jan 30 14:12:48 crc kubenswrapper[4793]: I0130 14:12:48.398635 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:12:48 crc kubenswrapper[4793]: E0130 14:12:48.398886 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:12:49 crc kubenswrapper[4793]: I0130 14:12:49.294363 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" 
event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerStarted","Data":"2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913"} Jan 30 14:12:50 crc kubenswrapper[4793]: I0130 14:12:50.305692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerStarted","Data":"d115148b62a0b6bbfe89b6c2eecac629107d624be74203eefd689a847c0d0cc0"} Jan 30 14:12:50 crc kubenswrapper[4793]: I0130 14:12:50.324896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" podStartSLOduration=2.449799429 podStartE2EDuration="3.324878225s" podCreationTimestamp="2026-01-30 14:12:47 +0000 UTC" firstStartedPulling="2026-01-30 14:12:48.33928305 +0000 UTC m=+1779.040631541" lastFinishedPulling="2026-01-30 14:12:49.214361846 +0000 UTC m=+1779.915710337" observedRunningTime="2026-01-30 14:12:50.323574173 +0000 UTC m=+1781.024922664" watchObservedRunningTime="2026-01-30 14:12:50.324878225 +0000 UTC m=+1781.026226716" Jan 30 14:12:52 crc kubenswrapper[4793]: I0130 14:12:52.324720 4793 generic.go:334] "Generic (PLEG): container finished" podID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerID="d115148b62a0b6bbfe89b6c2eecac629107d624be74203eefd689a847c0d0cc0" exitCode=0 Jan 30 14:12:52 crc kubenswrapper[4793]: I0130 14:12:52.324844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerDied","Data":"d115148b62a0b6bbfe89b6c2eecac629107d624be74203eefd689a847c0d0cc0"} Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.787180 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.852300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"b89c70f6-dabd-4984-8f21-235a9ab2f307\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.852389 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"b89c70f6-dabd-4984-8f21-235a9ab2f307\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.852489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod \"b89c70f6-dabd-4984-8f21-235a9ab2f307\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.858392 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c" (OuterVolumeSpecName: "kube-api-access-x755c") pod "b89c70f6-dabd-4984-8f21-235a9ab2f307" (UID: "b89c70f6-dabd-4984-8f21-235a9ab2f307"). InnerVolumeSpecName "kube-api-access-x755c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.880528 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b89c70f6-dabd-4984-8f21-235a9ab2f307" (UID: "b89c70f6-dabd-4984-8f21-235a9ab2f307"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.886654 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory" (OuterVolumeSpecName: "inventory") pod "b89c70f6-dabd-4984-8f21-235a9ab2f307" (UID: "b89c70f6-dabd-4984-8f21-235a9ab2f307"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.955142 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.955182 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.955193 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.379623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerDied","Data":"2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913"} Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.379899 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.379956 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.418462 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"] Jan 30 14:12:54 crc kubenswrapper[4793]: E0130 14:12:54.418880 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.418905 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.419158 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.419872 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.422227 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.422794 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.423122 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.423284 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.431008 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"] Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588447 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588504 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.593423 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.604453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.604880 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.611929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.792760 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:55 crc kubenswrapper[4793]: I0130 14:12:55.295719 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"] Jan 30 14:12:55 crc kubenswrapper[4793]: I0130 14:12:55.389698 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerStarted","Data":"b0a20486d3bd914ea9a743f522b5e81673abd5990bf5c761a63ac5098352d1ae"} Jan 30 14:12:56 crc kubenswrapper[4793]: I0130 14:12:56.430244 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerStarted","Data":"9c1a7842b45da0abe44314d798df617c5d0b04f46a40c3ce7525fbfda6de30dd"} Jan 30 14:12:56 crc kubenswrapper[4793]: I0130 14:12:56.430874 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" podStartSLOduration=2.011166983 podStartE2EDuration="2.430849495s" podCreationTimestamp="2026-01-30 14:12:54 +0000 UTC" firstStartedPulling="2026-01-30 14:12:55.30349179 +0000 UTC m=+1786.004840281" lastFinishedPulling="2026-01-30 14:12:55.723174282 +0000 UTC m=+1786.424522793" observedRunningTime="2026-01-30 14:12:56.42701247 +0000 UTC m=+1787.128360971" watchObservedRunningTime="2026-01-30 14:12:56.430849495 +0000 UTC m=+1787.132197986" Jan 30 14:13:03 crc kubenswrapper[4793]: I0130 14:13:03.399171 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:03 crc kubenswrapper[4793]: E0130 14:13:03.399744 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:17 crc kubenswrapper[4793]: I0130 14:13:17.397765 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:17 crc kubenswrapper[4793]: E0130 14:13:17.398334 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:28 crc kubenswrapper[4793]: I0130 14:13:28.399755 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:28 crc kubenswrapper[4793]: E0130 14:13:28.400488 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:40 crc kubenswrapper[4793]: I0130 14:13:40.405397 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:40 crc kubenswrapper[4793]: E0130 14:13:40.406399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.277293 4793 scope.go:117] "RemoveContainer" containerID="1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.310932 4793 scope.go:117] "RemoveContainer" containerID="aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.343857 4793 scope.go:117] "RemoveContainer" containerID="a550c028a717096d5e1912e30909f7370216f5f1ecf7d5091df70cd1de2ebf87" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.366562 4793 scope.go:117] "RemoveContainer" containerID="4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.413692 4793 scope.go:117] "RemoveContainer" containerID="9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.443305 4793 scope.go:117] "RemoveContainer" containerID="6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.472961 4793 scope.go:117] "RemoveContainer" containerID="4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.493316 4793 scope.go:117] "RemoveContainer" containerID="0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672" Jan 30 14:13:51 crc kubenswrapper[4793]: I0130 14:13:51.397893 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:51 crc kubenswrapper[4793]: E0130 14:13:51.399399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:06 crc kubenswrapper[4793]: I0130 14:14:06.398596 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:06 crc kubenswrapper[4793]: E0130 14:14:06.399259 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:19 crc kubenswrapper[4793]: I0130 14:14:19.398528 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:19 crc kubenswrapper[4793]: E0130 14:14:19.399308 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:33 crc kubenswrapper[4793]: I0130 14:14:33.400089 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:33 crc kubenswrapper[4793]: E0130 14:14:33.401231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:46 crc kubenswrapper[4793]: I0130 14:14:46.399345 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:46 crc kubenswrapper[4793]: E0130 14:14:46.400013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:59 crc kubenswrapper[4793]: I0130 14:14:59.398785 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:59 crc kubenswrapper[4793]: E0130 14:14:59.399500 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.185360 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.186731 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.188741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.188746 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.327438 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.361844 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.361992 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.362074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.464080 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.464145 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.464250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.467352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod 
\"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.472896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.485613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.507943 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.786953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 14:15:01 crc kubenswrapper[4793]: I0130 14:15:01.632509 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerStarted","Data":"169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae"} Jan 30 14:15:01 crc kubenswrapper[4793]: I0130 14:15:01.632749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerStarted","Data":"4208e4c3725077003c23a3d4fbe0f314a927f813f20d0698586e821994c97e38"} Jan 30 14:15:01 crc kubenswrapper[4793]: I0130 14:15:01.653643 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" podStartSLOduration=1.6536207109999999 podStartE2EDuration="1.653620711s" podCreationTimestamp="2026-01-30 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:15:01.648035143 +0000 UTC m=+1912.349383644" watchObservedRunningTime="2026-01-30 14:15:01.653620711 +0000 UTC m=+1912.354969202" Jan 30 14:15:02 crc kubenswrapper[4793]: I0130 14:15:02.642436 4793 generic.go:334] "Generic (PLEG): container finished" podID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerID="169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae" exitCode=0 Jan 30 14:15:02 crc kubenswrapper[4793]: I0130 14:15:02.642484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerDied","Data":"169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae"} Jan 30 14:15:03 crc kubenswrapper[4793]: I0130 14:15:03.993871 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.038809 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"dea958b8-aeb8-4696-b604-f1459d6d5608\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.038984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod \"dea958b8-aeb8-4696-b604-f1459d6d5608\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.039016 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"dea958b8-aeb8-4696-b604-f1459d6d5608\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.046552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dea958b8-aeb8-4696-b604-f1459d6d5608" (UID: "dea958b8-aeb8-4696-b604-f1459d6d5608"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.050668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume" (OuterVolumeSpecName: "config-volume") pod "dea958b8-aeb8-4696-b604-f1459d6d5608" (UID: "dea958b8-aeb8-4696-b604-f1459d6d5608"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.064429 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh" (OuterVolumeSpecName: "kube-api-access-96dfh") pod "dea958b8-aeb8-4696-b604-f1459d6d5608" (UID: "dea958b8-aeb8-4696-b604-f1459d6d5608"). InnerVolumeSpecName "kube-api-access-96dfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.141320 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.141357 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.141366 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.669848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerDied","Data":"4208e4c3725077003c23a3d4fbe0f314a927f813f20d0698586e821994c97e38"} Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.670346 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4208e4c3725077003c23a3d4fbe0f314a927f813f20d0698586e821994c97e38" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.669924 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:10 crc kubenswrapper[4793]: I0130 14:15:10.407223 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:15:10 crc kubenswrapper[4793]: E0130 14:15:10.407931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.219426 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.225813 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.247688 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.259202 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.268263 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.276757 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.027181 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.042370 4793 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.052617 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.063465 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.071602 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.082591 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.411549 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" path="/var/lib/kubelet/pods/563516b7-0256-4c05-b1d1-3aa03d692afb/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.414399 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" path="/var/lib/kubelet/pods/62fbb159-dc72-4c34-b2b7-5be6be4df981/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.415881 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" path="/var/lib/kubelet/pods/6d0f274e-c187-4f1a-aa78-508b1761f9fb/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.417474 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" path="/var/lib/kubelet/pods/98986ea8-62f3-4716-9451-0e13567ec2a1/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.418471 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" path="/var/lib/kubelet/pods/b3f03641-1e63-4c88-a1f4-f58cf0d81883/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.420362 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" path="/var/lib/kubelet/pods/f81f2e71-1a70-491f-ba0c-ad1a456345c8/volumes" Jan 30 14:15:22 crc kubenswrapper[4793]: I0130 14:15:22.399400 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:15:22 crc kubenswrapper[4793]: I0130 14:15:22.837642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"} Jan 30 14:15:36 crc kubenswrapper[4793]: I0130 14:15:36.052722 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:15:36 crc kubenswrapper[4793]: I0130 14:15:36.063620 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:15:36 crc kubenswrapper[4793]: I0130 14:15:36.413205 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" path="/var/lib/kubelet/pods/ec365c0b-f8d9-4b59-bb89-a583d1eb7257/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.054974 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.061517 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.078685 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.089473 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.099555 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.108287 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.115980 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.124769 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.133417 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.140951 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.148801 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.155557 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.412635 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13613099-2932-4476-8032-82095348fb10" path="/var/lib/kubelet/pods/13613099-2932-4476-8032-82095348fb10/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.416016 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f786311-b5ef-427f-b167-c49267de28c6" path="/var/lib/kubelet/pods/1f786311-b5ef-427f-b167-c49267de28c6/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.420793 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" path="/var/lib/kubelet/pods/2392ab6f-ca9b-4211-bd23-a243ce0ee554/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.424190 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" path="/var/lib/kubelet/pods/6c07a623-53fe-44a2-9810-5d1137c659c3/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.426617 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" path="/var/lib/kubelet/pods/bfa3c464-d85c-4ea1-816e-7dda86dbb9de/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.430542 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00abb05-5932-47c8-9bd4-34014f966013" path="/var/lib/kubelet/pods/e00abb05-5932-47c8-9bd4-34014f966013/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.610306 4793 scope.go:117] "RemoveContainer" 
containerID="e076400efeb8dc1f3b157eb928b1925e404de84a86497e6441e959675b9ddf99" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.648390 4793 scope.go:117] "RemoveContainer" containerID="73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.696427 4793 scope.go:117] "RemoveContainer" containerID="e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.741590 4793 scope.go:117] "RemoveContainer" containerID="b3caaa69aab524adb26fd9c4ff43996ac15d6994d1472ccaa076a079e9b6dba0" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.790135 4793 scope.go:117] "RemoveContainer" containerID="49617378d146339946d69a33ebd155e69d9eb4e257e62cbaa6d931330bc913ba" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.828761 4793 scope.go:117] "RemoveContainer" containerID="be7f675ca5c9219f83817d0e2dc9af6d1edad5191618166a3b580984eb47dd17" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.880570 4793 scope.go:117] "RemoveContainer" containerID="88e81edcf2367a38a7b0e1df9af6001a75b1047fd8c5d669cd70d0dad383c305" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.907158 4793 scope.go:117] "RemoveContainer" containerID="792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.931626 4793 scope.go:117] "RemoveContainer" containerID="2bc34dab4f37d7b6429a87926db0d3a5178ff268821d2ee975bfe47cb007e77b" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.951251 4793 scope.go:117] "RemoveContainer" containerID="75d0a8131037e3e42e5261a0799894acdf4d57f9756c3dd89c681177ee69f801" Jan 30 14:15:49 crc kubenswrapper[4793]: I0130 14:15:49.002693 4793 scope.go:117] "RemoveContainer" containerID="43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053" Jan 30 14:15:49 crc kubenswrapper[4793]: I0130 14:15:49.021875 4793 scope.go:117] "RemoveContainer" containerID="4a2aafe80408cac269537f00f3232599775bbba2b58f84e2c22d7bc9ff168a56" Jan 30 14:15:49 crc kubenswrapper[4793]: I0130 14:15:49.106959 4793 scope.go:117] "RemoveContainer" containerID="3efaeb1f3745caf5c2ff18e628906fd2ae05a6952ec9376aacd048e2c31a3cdb" Jan 30 14:15:54 crc kubenswrapper[4793]: I0130 14:15:54.048143 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:15:54 crc kubenswrapper[4793]: I0130 14:15:54.056556 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:15:54 crc kubenswrapper[4793]: I0130 14:15:54.414588 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" path="/var/lib/kubelet/pods/caec468e-bf72-4c93-8b47-6aac4c7a0b3d/volumes" Jan 30 14:15:57 crc kubenswrapper[4793]: I0130 14:15:57.033842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:15:57 crc kubenswrapper[4793]: I0130 14:15:57.047270 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:15:58 crc kubenswrapper[4793]: I0130 14:15:58.409414 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" path="/var/lib/kubelet/pods/2b977757-3d3e-48e5-a1e2-d31ebeda138e/volumes" Jan 30 14:16:19 crc kubenswrapper[4793]: I0130 14:16:19.374078 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ba6b544-0042-43d7-abe9-bc40439f804b" 
containerID="9c1a7842b45da0abe44314d798df617c5d0b04f46a40c3ce7525fbfda6de30dd" exitCode=0 Jan 30 14:16:19 crc kubenswrapper[4793]: I0130 14:16:19.374147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerDied","Data":"9c1a7842b45da0abe44314d798df617c5d0b04f46a40c3ce7525fbfda6de30dd"} Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.799130 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841160 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841369 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.856506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt" (OuterVolumeSpecName: "kube-api-access-7s7wt") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "kube-api-access-7s7wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.856883 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.870000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory" (OuterVolumeSpecName: "inventory") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.901698 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943826 4793 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943875 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943890 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943901 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.392382 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerDied","Data":"b0a20486d3bd914ea9a743f522b5e81673abd5990bf5c761a63ac5098352d1ae"} Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.392426 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a20486d3bd914ea9a743f522b5e81673abd5990bf5c761a63ac5098352d1ae" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.392495 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.509276 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn"] Jan 30 14:16:21 crc kubenswrapper[4793]: E0130 14:16:21.509645 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerName="collect-profiles" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.509658 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerName="collect-profiles" Jan 30 14:16:21 crc kubenswrapper[4793]: E0130 14:16:21.509678 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba6b544-0042-43d7-abe9-bc40439f804b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.509685 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba6b544-0042-43d7-abe9-bc40439f804b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.511200 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerName="collect-profiles" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.511237 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba6b544-0042-43d7-abe9-bc40439f804b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.511847 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.514261 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.519975 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.520112 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.520300 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.533227 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn"] Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.572976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.573523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: 
\"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.573784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.679901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.679988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.680081 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.683891 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.691578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.696611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.882015 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:22 crc kubenswrapper[4793]: I0130 14:16:22.381814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn"] Jan 30 14:16:22 crc kubenswrapper[4793]: I0130 14:16:22.384765 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:16:22 crc kubenswrapper[4793]: I0130 14:16:22.408346 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerStarted","Data":"4f82d849edc1d49a6b3562c2709f3f78a78f51f4b85225f15283609622841135"} Jan 30 14:16:23 crc kubenswrapper[4793]: I0130 14:16:23.416965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerStarted","Data":"23e76aba0770af4205b13b6be7f728153ae9d3e1a0ab347b0af1c9d3bfcaa979"} Jan 30 14:16:23 crc kubenswrapper[4793]: I0130 14:16:23.440795 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" podStartSLOduration=1.9378308400000002 podStartE2EDuration="2.440777308s" podCreationTimestamp="2026-01-30 14:16:21 +0000 UTC" firstStartedPulling="2026-01-30 14:16:22.384528164 +0000 UTC m=+1993.085876655" lastFinishedPulling="2026-01-30 14:16:22.887474632 +0000 UTC m=+1993.588823123" observedRunningTime="2026-01-30 14:16:23.439792514 +0000 UTC m=+1994.141141005" watchObservedRunningTime="2026-01-30 14:16:23.440777308 +0000 UTC m=+1994.142125799" Jan 30 14:16:49 crc kubenswrapper[4793]: I0130 14:16:49.465719 4793 scope.go:117] "RemoveContainer" containerID="2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500" Jan 30 14:16:49 crc kubenswrapper[4793]: I0130 14:16:49.506148 4793 scope.go:117] "RemoveContainer" containerID="aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c" Jan 30 14:17:06 crc kubenswrapper[4793]: I0130 14:17:06.045581 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:17:06 crc kubenswrapper[4793]: I0130 14:17:06.053623 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:17:06 crc kubenswrapper[4793]: I0130 14:17:06.415490 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" path="/var/lib/kubelet/pods/b8ea0161-c696-4578-a6f7-285a4253dc0f/volumes" Jan 30 14:17:11 crc kubenswrapper[4793]: I0130 14:17:11.033952 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:17:11 crc kubenswrapper[4793]: I0130 14:17:11.045027 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:17:12 crc kubenswrapper[4793]: I0130 14:17:12.409775 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" path="/var/lib/kubelet/pods/644bf4c3-aaaf-45fa-9692-73406a657226/volumes" Jan 30 14:17:14 crc kubenswrapper[4793]: I0130 14:17:14.031683 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:17:14 crc kubenswrapper[4793]: I0130 14:17:14.040636 4793 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:17:14 crc kubenswrapper[4793]: I0130 14:17:14.410119 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="126207f4-9b13-4892-aa15-0616a488af8c" path="/var/lib/kubelet/pods/126207f4-9b13-4892-aa15-0616a488af8c/volumes" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.462783 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.464912 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.472740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.641401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.641571 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.641605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.743561 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.743901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.744027 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.744108 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod 
\"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.744374 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.768736 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:18 crc kubenswrapper[4793]: I0130 14:17:18.031696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:18 crc kubenswrapper[4793]: I0130 14:17:18.319840 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.058016 4793 generic.go:334] "Generic (PLEG): container finished" podID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" exitCode=0 Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.058142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e"} Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.058167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerStarted","Data":"8541f4e5dad7feb52e06e419a4a0323b953c46b0cd2b983f0cc2f7e0dc8bba8e"} Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.264008 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.266882 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.283788 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.365303 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.365390 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.365435 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.467360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.467425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.467575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.468179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.468762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.488279 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.583369 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.891744 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.905496 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.976131 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.977384 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.977455 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.977669 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.069363 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerStarted","Data":"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c"} Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082252 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082352 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.083759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.111732 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: W0130 14:17:20.172508 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd8c0f6_66a2_4eeb_889c_31dd7d8d8606.slice/crio-0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950 WatchSource:0}: Error finding container 0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950: Status 404 returned error can't find the container with id 0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950 Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.174200 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.288794 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.288794 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l"
Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.791324 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"]
Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.078742 4793 generic.go:334] "Generic (PLEG): container finished" podID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerID="97e00f686b282180edd4c6895080d4ff4fea6b3dd37684dbd36be6025541ffd0" exitCode=0
Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.078800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"97e00f686b282180edd4c6895080d4ff4fea6b3dd37684dbd36be6025541ffd0"}
Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.078876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerStarted","Data":"0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950"}
Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.082121 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5" exitCode=0
Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.082164 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"}
Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.082209 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerStarted","Data":"ee99dc24d6773b1ef81ef15f8abc22453a691035e3bb9cf3a583bb3c23f8c1e4"}
Jan 30 14:17:22 crc kubenswrapper[4793]: I0130 14:17:22.086532 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4rknj"]
Jan 30 14:17:22 crc kubenswrapper[4793]: I0130 14:17:22.095592 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4rknj"]
Jan 30 14:17:22 crc kubenswrapper[4793]: I0130 14:17:22.408604 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" path="/var/lib/kubelet/pods/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd/volumes"
Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.107868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerStarted","Data":"87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949"}
Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.110457 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerStarted","Data":"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"}
Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.112849 4793 generic.go:334] "Generic (PLEG): container finished" podID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" exitCode=0
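
[Editor note] The "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs above come from the pod lifecycle event generator: it relists containers through the CRI, diffs the result against its cache, and feeds (pod UID, event type, container ID) tuples into the kubelet sync loop. These catalog pods run init containers (named extract-utilities and extract-content later in this log) to completion, so each ContainerDied with exitCode=0 is expected and is followed by the next ContainerStarted. A simplified sketch of the event shape; field names follow the log output, not necessarily the exact kubelet source:

package main

import "fmt"

// PodLifecycleEventType mirrors the "Type" field in the SyncLoop (PLEG)
// entries above ("ContainerStarted", "ContainerDied", ...).
type PodLifecycleEventType string

const (
	ContainerStarted PodLifecycleEventType = "ContainerStarted"
	ContainerDied    PodLifecycleEventType = "ContainerDied"
)

// PodLifecycleEvent is a simplified stand-in for a PLEG event: the pod UID,
// what happened, and the container ID it happened to.
type PodLifecycleEvent struct {
	ID   string // pod UID, e.g. "4d85b4c3-8b96-424c-a7f0-82257f2af0da"
	Type PodLifecycleEventType
	Data string // container ID (truncated below for readability)
}

func main() {
	// The init-container handoff as seen in the log: one container dies with
	// exit code 0, then the next one starts in the same pod sandbox.
	events := []PodLifecycleEvent{
		{ID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da", Type: ContainerDied, Data: "e0906f8b..."},
		{ID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da", Type: ContainerStarted, Data: "9a5eae7a..."},
	}
	for _, ev := range events {
		fmt.Printf("SyncLoop (PLEG): pod=%s type=%s container=%s\n", ev.ID, ev.Type, ev.Data)
	}
}
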
Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.112893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c"}
Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.130520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerStarted","Data":"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227"}
Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.133506 4793 generic.go:334] "Generic (PLEG): container finished" podID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerID="87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949" exitCode=0
Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.133552 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949"}
Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.150318 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mbmz8" podStartSLOduration=3.260544982 podStartE2EDuration="8.150297829s" podCreationTimestamp="2026-01-30 14:17:17 +0000 UTC" firstStartedPulling="2026-01-30 14:17:19.061017653 +0000 UTC m=+2049.762366144" lastFinishedPulling="2026-01-30 14:17:23.9507705 +0000 UTC m=+2054.652118991" observedRunningTime="2026-01-30 14:17:25.14909081 +0000 UTC m=+2055.850439301" watchObservedRunningTime="2026-01-30 14:17:25.150297829 +0000 UTC m=+2055.851646320"
Jan 30 14:17:26 crc kubenswrapper[4793]: I0130 14:17:26.143663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerStarted","Data":"085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c"}
Jan 30 14:17:26 crc kubenswrapper[4793]: I0130 14:17:26.173757 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jf58" podStartSLOduration=2.7055159399999997 podStartE2EDuration="7.173739939s" podCreationTimestamp="2026-01-30 14:17:19 +0000 UTC" firstStartedPulling="2026-01-30 14:17:21.080299175 +0000 UTC m=+2051.781647666" lastFinishedPulling="2026-01-30 14:17:25.548523174 +0000 UTC m=+2056.249871665" observedRunningTime="2026-01-30 14:17:26.171513986 +0000 UTC m=+2056.872862467" watchObservedRunningTime="2026-01-30 14:17:26.173739939 +0000 UTC m=+2056.875088430"
Jan 30 14:17:28 crc kubenswrapper[4793]: I0130 14:17:28.033489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mbmz8"
Jan 30 14:17:28 crc kubenswrapper[4793]: I0130 14:17:28.033604 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mbmz8"
Jan 30 14:17:28 crc kubenswrapper[4793]: I0130 14:17:28.092758 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mbmz8"
Jan 30 14:17:29 crc kubenswrapper[4793]: I0130 14:17:29.585137 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jf58"
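
[Editor note] A useful reading of the "Observed pod startup duration" entries above: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window, that is, startup latency with pull time excluded. For community-operators-mbmz8: lastFinishedPulling - firstStartedPulling = 14:17:23.950770500 - 14:17:19.061017653 = 4.889752847 s, and 8.150297829 s - 4.889752847 s = 3.260544982 s, exactly the logged podStartSLOduration. The same arithmetic holds for redhat-marketplace-9jf58: pull window 4.468223999 s, and 7.173739939 - 4.468223999 = 2.705515940 s, matching the logged value up to float formatting.
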
Jan 30 14:17:29 crc kubenswrapper[4793]: I0130 14:17:29.585190 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jf58"
Jan 30 14:17:30 crc kubenswrapper[4793]: I0130 14:17:30.636886 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9jf58" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" probeResult="failure" output=<
Jan 30 14:17:30 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 14:17:30 crc kubenswrapper[4793]: >
Jan 30 14:17:33 crc kubenswrapper[4793]: I0130 14:17:33.206445 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9" exitCode=0
Jan 30 14:17:33 crc kubenswrapper[4793]: I0130 14:17:33.206556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"}
Jan 30 14:17:35 crc kubenswrapper[4793]: I0130 14:17:35.227497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerStarted","Data":"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"}
Jan 30 14:17:35 crc kubenswrapper[4793]: I0130 14:17:35.249273 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lb62l" podStartSLOduration=3.107478985 podStartE2EDuration="16.249248191s" podCreationTimestamp="2026-01-30 14:17:19 +0000 UTC" firstStartedPulling="2026-01-30 14:17:21.084504816 +0000 UTC m=+2051.785853297" lastFinishedPulling="2026-01-30 14:17:34.226274002 +0000 UTC m=+2064.927622503" observedRunningTime="2026-01-30 14:17:35.246637617 +0000 UTC m=+2065.947986148" watchObservedRunningTime="2026-01-30 14:17:35.249248191 +0000 UTC m=+2065.950596722"
Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.035925 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9k2k7"]
Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.050871 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9k2k7"]
Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.090634 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mbmz8"
Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.137129 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"]
Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.254389 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mbmz8" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" containerID="cri-o://4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" gracePeriod=2
Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.413558 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" path="/var/lib/kubelet/pods/16a2a816-c28c-4d74-848a-2821a9d68d70/volumes"
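
[Editor note] The repeated startup-probe failure above, "timeout: failed to connect service \":50051\" within 1s", is the characteristic signature of a gRPC health check against a catalog registry server that is still loading its content on port 50051; the startup probe keeps failing until the server reports SERVING, at which point the log shows probe="startup" status="started". A minimal Go sketch of such a check using the standard grpc health/v1 API (this is an illustration, not the actual probe binary the pod runs; the address and the 1 s budget mirror the log output):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// One-second overall budget, matching the "within 1s" in the probe output.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.Dial(":50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		// A timeout or a non-SERVING answer maps to probeResult="failure" above.
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	fmt.Println("SERVING")
}
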
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.071817 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8"
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.183807 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") "
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.183907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") "
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.183996 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") "
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.184689 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities" (OuterVolumeSpecName: "utilities") pod "8e44d38b-8b51-4589-bc6a-e69a004b83f6" (UID: "8e44d38b-8b51-4589-bc6a-e69a004b83f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.199247 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr" (OuterVolumeSpecName: "kube-api-access-tmskr") pod "8e44d38b-8b51-4589-bc6a-e69a004b83f6" (UID: "8e44d38b-8b51-4589-bc6a-e69a004b83f6"). InnerVolumeSpecName "kube-api-access-tmskr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.256276 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e44d38b-8b51-4589-bc6a-e69a004b83f6" (UID: "8e44d38b-8b51-4589-bc6a-e69a004b83f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264312 4793 generic.go:334] "Generic (PLEG): container finished" podID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" exitCode=0 Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227"} Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"8541f4e5dad7feb52e06e419a4a0323b953c46b0cd2b983f0cc2f7e0dc8bba8e"} Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264410 4793 scope.go:117] "RemoveContainer" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264635 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.287309 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.287350 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.287364 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.298278 4793 scope.go:117] "RemoveContainer" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.326875 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.333101 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.412205 4793 scope.go:117] "RemoveContainer" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.514122 4793 scope.go:117] "RemoveContainer" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" Jan 30 14:17:39 crc kubenswrapper[4793]: E0130 14:17:39.514537 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227\": container with ID starting with 4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227 not found: ID does not exist" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.514649 
4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227"} err="failed to get container status \"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227\": rpc error: code = NotFound desc = could not find container \"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227\": container with ID starting with 4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227 not found: ID does not exist" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.514775 4793 scope.go:117] "RemoveContainer" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" Jan 30 14:17:39 crc kubenswrapper[4793]: E0130 14:17:39.515343 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c\": container with ID starting with 710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c not found: ID does not exist" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.515374 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c"} err="failed to get container status \"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c\": rpc error: code = NotFound desc = could not find container \"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c\": container with ID starting with 710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c not found: ID does not exist" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.515387 4793 scope.go:117] "RemoveContainer" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" Jan 30 14:17:39 crc kubenswrapper[4793]: E0130 14:17:39.515627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e\": container with ID starting with 13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e not found: ID does not exist" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.515692 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e"} err="failed to get container status \"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e\": rpc error: code = NotFound desc = could not find container \"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e\": container with ID starting with 13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e not found: ID does not exist" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.290601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.292145 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.413115 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" 
path="/var/lib/kubelet/pods/8e44d38b-8b51-4589-bc6a-e69a004b83f6/volumes" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.636097 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9jf58" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:40 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:40 crc kubenswrapper[4793]: > Jan 30 14:17:41 crc kubenswrapper[4793]: I0130 14:17:41.342152 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:41 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:41 crc kubenswrapper[4793]: > Jan 30 14:17:42 crc kubenswrapper[4793]: I0130 14:17:42.414098 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:17:42 crc kubenswrapper[4793]: I0130 14:17:42.414399 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.634320 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.671279 4793 scope.go:117] "RemoveContainer" containerID="f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.697670 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.733387 4793 scope.go:117] "RemoveContainer" containerID="3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.763290 4793 scope.go:117] "RemoveContainer" containerID="32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.819249 4793 scope.go:117] "RemoveContainer" containerID="ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.944754 4793 scope.go:117] "RemoveContainer" containerID="bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7" Jan 30 14:17:51 crc kubenswrapper[4793]: I0130 14:17:51.066615 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:51 crc kubenswrapper[4793]: I0130 14:17:51.339138 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:51 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:51 crc kubenswrapper[4793]: > Jan 30 14:17:51 crc 
Jan 30 14:17:51 crc kubenswrapper[4793]: I0130 14:17:51.375981 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jf58" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" containerID="cri-o://085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c" gracePeriod=2
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390371 4793 generic.go:334] "Generic (PLEG): container finished" podID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerID="085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c" exitCode=0
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c"}
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390637 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950"}
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390659 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950"
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.428831 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58"
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.556520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") "
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.556907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") "
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.557366 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") "
Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.557676 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities" (OuterVolumeSpecName: "utilities") pod "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" (UID: "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.558188 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.564358 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8" (OuterVolumeSpecName: "kube-api-access-7b8f8") pod "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" (UID: "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606"). InnerVolumeSpecName "kube-api-access-7b8f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.585611 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" (UID: "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.660370 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.660405 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:53 crc kubenswrapper[4793]: I0130 14:17:53.397151 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:53 crc kubenswrapper[4793]: I0130 14:17:53.432134 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:53 crc kubenswrapper[4793]: I0130 14:17:53.443017 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:54 crc kubenswrapper[4793]: I0130 14:17:54.411369 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" path="/var/lib/kubelet/pods/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606/volumes" Jan 30 14:18:01 crc kubenswrapper[4793]: I0130 14:18:01.341537 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:18:01 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:18:01 crc kubenswrapper[4793]: > Jan 30 14:18:09 crc kubenswrapper[4793]: I0130 14:18:09.550167 4793 generic.go:334] "Generic (PLEG): container finished" podID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerID="23e76aba0770af4205b13b6be7f728153ae9d3e1a0ab347b0af1c9d3bfcaa979" exitCode=0 Jan 30 14:18:09 crc kubenswrapper[4793]: I0130 14:18:09.550228 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerDied","Data":"23e76aba0770af4205b13b6be7f728153ae9d3e1a0ab347b0af1c9d3bfcaa979"} Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.053905 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.218567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.218618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.218661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.226011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql" (OuterVolumeSpecName: "kube-api-access-sk6ql") pod "f1632f4b-e0e5-4069-a77b-ae4f1911869b" (UID: "f1632f4b-e0e5-4069-a77b-ae4f1911869b"). InnerVolumeSpecName "kube-api-access-sk6ql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.256519 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1632f4b-e0e5-4069-a77b-ae4f1911869b" (UID: "f1632f4b-e0e5-4069-a77b-ae4f1911869b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.261416 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory" (OuterVolumeSpecName: "inventory") pod "f1632f4b-e0e5-4069-a77b-ae4f1911869b" (UID: "f1632f4b-e0e5-4069-a77b-ae4f1911869b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.321353 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.321391 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.321403 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.341191 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:18:11 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:18:11 crc kubenswrapper[4793]: > Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.589722 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerDied","Data":"4f82d849edc1d49a6b3562c2709f3f78a78f51f4b85225f15283609622841135"} Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.589776 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f82d849edc1d49a6b3562c2709f3f78a78f51f4b85225f15283609622841135" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.589800 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.648496 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"] Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649228 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649310 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649429 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649536 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649639 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649713 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649786 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649899 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649958 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.650025 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650096 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.650163 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650242 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650489 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650594 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650659 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.651318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.656907 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.656954 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.657003 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.657177 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.665216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"] Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.729042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.729170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.729202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.831455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.831525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.831661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.837192 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.838200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.852300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.972909 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.414229 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.414485 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.515400 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"] Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.599989 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerStarted","Data":"bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436"} Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.054836 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.072083 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.088856 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.104876 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.117281 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.130674 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.140840 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.151694 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.035825 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.049023 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.062524 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.073332 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.409686 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" path="/var/lib/kubelet/pods/20523849-0caa-42b2-9b52-d5661f90ea95/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.410922 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" path="/var/lib/kubelet/pods/22f1b95b-bf17-486c-a4b0-0a2aa96cf847/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.411792 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" path="/var/lib/kubelet/pods/6a263a6b-c717-4bb9-ae46-edfd534e347f/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.412543 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" path="/var/lib/kubelet/pods/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.413679 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" path="/var/lib/kubelet/pods/aec60191-c8b7-4d7a-a69f-765a9652878b/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.414304 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" path="/var/lib/kubelet/pods/ed8e6fd4-c884-4a5d-8189-3929beafa311/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.621551 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerStarted","Data":"a683476bd8aa939b00c339db91216a1956614d78f5849fe148f48cb8ff8b0d51"} Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.642289 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" podStartSLOduration=2.773588899 podStartE2EDuration="3.642272396s" podCreationTimestamp="2026-01-30 14:18:11 +0000 UTC" firstStartedPulling="2026-01-30 14:18:12.526918599 +0000 UTC m=+2103.228267090" lastFinishedPulling="2026-01-30 14:18:13.395602096 +0000 UTC m=+2104.096950587" observedRunningTime="2026-01-30 14:18:14.636006125 +0000 UTC m=+2105.337354626" watchObservedRunningTime="2026-01-30 14:18:14.642272396 +0000 UTC m=+2105.343620887" Jan 30 14:18:20 crc kubenswrapper[4793]: I0130 14:18:20.345479 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:18:20 crc kubenswrapper[4793]: I0130 14:18:20.413268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:18:20 crc kubenswrapper[4793]: I0130 14:18:20.580468 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:18:21 crc kubenswrapper[4793]: I0130 14:18:21.698879 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" containerID="cri-o://bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" gracePeriod=2 Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.188936 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.351215 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") "
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.351458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") "
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.351566 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") "
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.352184 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities" (OuterVolumeSpecName: "utilities") pod "4d85b4c3-8b96-424c-a7f0-82257f2af0da" (UID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.359933 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742" (OuterVolumeSpecName: "kube-api-access-62742") pod "4d85b4c3-8b96-424c-a7f0-82257f2af0da" (UID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da"). InnerVolumeSpecName "kube-api-access-62742". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.454298 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") on node \"crc\" DevicePath \"\""
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.454330 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.487367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d85b4c3-8b96-424c-a7f0-82257f2af0da" (UID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.557530 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.709850 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" exitCode=0
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.710992 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.711039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"}
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.711614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"ee99dc24d6773b1ef81ef15f8abc22453a691035e3bb9cf3a583bb3c23f8c1e4"}
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.711642 4793 scope.go:117] "RemoveContainer" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.748543 4793 scope.go:117] "RemoveContainer" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.749644 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"]
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.758153 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"]
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.869917 4793 scope.go:117] "RemoveContainer" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.908185 4793 scope.go:117] "RemoveContainer" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"
Jan 30 14:18:22 crc kubenswrapper[4793]: E0130 14:18:22.908687 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff\": container with ID starting with bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff not found: ID does not exist" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.908819 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"} err="failed to get container status \"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff\": rpc error: code = NotFound desc = could not find container \"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff\": container with ID starting with bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff not found: ID does not exist"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.908926 4793 scope.go:117] "RemoveContainer" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"
Jan 30 14:18:22 crc kubenswrapper[4793]: E0130 14:18:22.909375 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9\": container with ID starting with 9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9 not found: ID does not exist" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.909406 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"} err="failed to get container status \"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9\": rpc error: code = NotFound desc = could not find container \"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9\": container with ID starting with 9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9 not found: ID does not exist"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.909427 4793 scope.go:117] "RemoveContainer" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"
Jan 30 14:18:22 crc kubenswrapper[4793]: E0130 14:18:22.909764 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5\": container with ID starting with e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5 not found: ID does not exist" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"
Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.909868 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"} err="failed to get container status \"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5\": rpc error: code = NotFound desc = could not find container \"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5\": container with ID starting with e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5 not found: ID does not exist"
Jan 30 14:18:24 crc kubenswrapper[4793]: I0130 14:18:24.419811 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" path="/var/lib/kubelet/pods/4d85b4c3-8b96-424c-a7f0-82257f2af0da/volumes"
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.414070 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.414560 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.414598 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.415288 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.415339 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c" gracePeriod=600
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.936619 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c" exitCode=0
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.936670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"}
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.937342 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"}
Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.937413 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.184384 4793 scope.go:117] "RemoveContainer" containerID="28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9"
Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.216778 4793 scope.go:117] "RemoveContainer" containerID="133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3"
Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.272210 4793 scope.go:117] "RemoveContainer" containerID="3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c"
Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.320760 4793 scope.go:117] "RemoveContainer" containerID="2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a"
Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.390864 4793 scope.go:117] "RemoveContainer" containerID="8dcf35a2124b97e38202260bc4331118f9488517abad0d7a3392779f07bd54b6"
Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.435815 4793 scope.go:117] "RemoveContainer" containerID="de572dff5d2f58a1803be7f7064305ab032e127eb6c4e1ab6668a1723190ad57"
Jan 30 14:19:12 crc kubenswrapper[4793]: I0130 14:19:12.052757 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"]
Jan 30 14:19:12 crc kubenswrapper[4793]: I0130 14:19:12.061485 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"]
Jan 30 14:19:12 crc kubenswrapper[4793]: I0130 14:19:12.416935 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" path="/var/lib/kubelet/pods/4ba071cd-0f26-432d-809e-709cad1a1e64/volumes"
Jan 30 14:19:35 crc kubenswrapper[4793]: I0130 14:19:35.037929 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"]
Jan 30 14:19:35 crc kubenswrapper[4793]: I0130 14:19:35.046016 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"]
Jan 30 14:19:36 crc kubenswrapper[4793]: I0130 14:19:36.409445 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" path="/var/lib/kubelet/pods/ebcc9239-aedb-41d4-bac8-d03c56c76f4a/volumes"
Jan 30 14:19:38 crc kubenswrapper[4793]: I0130 14:19:38.423026 4793 generic.go:334] "Generic (PLEG): container finished" podID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerID="a683476bd8aa939b00c339db91216a1956614d78f5849fe148f48cb8ff8b0d51" exitCode=0
Jan 30 14:19:38 crc kubenswrapper[4793]: I0130 14:19:38.423135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerDied","Data":"a683476bd8aa939b00c339db91216a1956614d78f5849fe148f48cb8ff8b0d51"}
Jan 30 14:19:39 crc kubenswrapper[4793]: I0130 14:19:39.030147 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"]
Jan 30 14:19:39 crc kubenswrapper[4793]: I0130 14:19:39.039777 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"]
Jan 30 14:19:39 crc kubenswrapper[4793]: I0130 14:19:39.857847 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.017132 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") "
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.017170 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") "
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.017267 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") "
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.026345 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb" (OuterVolumeSpecName: "kube-api-access-v7dcb") pod "260f1ea9-6ba5-40aa-ab56-e95237cb1009" (UID: "260f1ea9-6ba5-40aa-ab56-e95237cb1009"). InnerVolumeSpecName "kube-api-access-v7dcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.046386 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory" (OuterVolumeSpecName: "inventory") pod "260f1ea9-6ba5-40aa-ab56-e95237cb1009" (UID: "260f1ea9-6ba5-40aa-ab56-e95237cb1009"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.052633 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "260f1ea9-6ba5-40aa-ab56-e95237cb1009" (UID: "260f1ea9-6ba5-40aa-ab56-e95237cb1009"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.120658 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") on node \"crc\" DevicePath \"\""
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.120696 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.120707 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.429145 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" path="/var/lib/kubelet/pods/45bc0c92-8817-447f-a591-d593d49d1b22/volumes"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.446700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerDied","Data":"bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436"}
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.446748 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.446841 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"
Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.484696 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod260f1ea9_6ba5_40aa_ab56_e95237cb1009.slice/crio-bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436\": RecentStats: unable to find data in memory cache]"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.543398 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"]
Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.543959 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-utilities"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.543984 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-utilities"
Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.544020 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-content"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544028 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-content"
Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.544042 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.544643 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544649 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544828 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544850 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.545541 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.551406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.551543 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.551673 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.552094 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.557540 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"]
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.632700 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.632784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.632872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.733919 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.734040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.734777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.738122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.738456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.752342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.869568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:41 crc kubenswrapper[4793]: I0130 14:19:41.368812 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"]
Jan 30 14:19:41 crc kubenswrapper[4793]: I0130 14:19:41.454933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerStarted","Data":"b30b161b2c886673222efbf4812da71581156b85df480b4917abb89388fa0ed3"}
Jan 30 14:19:43 crc kubenswrapper[4793]: I0130 14:19:43.475403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerStarted","Data":"0d34f2957d2ad401e219ae0354f20a2ece09cdf58a83fa508fad82e05c0cdbeb"}
Jan 30 14:19:43 crc kubenswrapper[4793]: I0130 14:19:43.495616 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" podStartSLOduration=2.579955269 podStartE2EDuration="3.495594061s" podCreationTimestamp="2026-01-30 14:19:40 +0000 UTC" firstStartedPulling="2026-01-30 14:19:41.374147227 +0000 UTC m=+2192.075495718" lastFinishedPulling="2026-01-30 14:19:42.289786019 +0000 UTC m=+2192.991134510" observedRunningTime="2026-01-30 14:19:43.492280661 +0000 UTC m=+2194.193629162" watchObservedRunningTime="2026-01-30 14:19:43.495594061 +0000 UTC m=+2194.196942552"
Jan 30 14:19:47 crc kubenswrapper[4793]: I0130 14:19:47.519752 4793 generic.go:334] "Generic (PLEG): container finished" podID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerID="0d34f2957d2ad401e219ae0354f20a2ece09cdf58a83fa508fad82e05c0cdbeb" exitCode=0
Jan 30 14:19:47 crc kubenswrapper[4793]: I0130 14:19:47.520249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerDied","Data":"0d34f2957d2ad401e219ae0354f20a2ece09cdf58a83fa508fad82e05c0cdbeb"}
Jan 30 14:19:48 crc kubenswrapper[4793]: I0130 14:19:48.997193 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.126163 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"dcc6f491-d722-48e4-bcb8-8a9de7603786\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") "
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.126306 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"dcc6f491-d722-48e4-bcb8-8a9de7603786\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") "
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.126438 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"dcc6f491-d722-48e4-bcb8-8a9de7603786\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") "
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.134463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg" (OuterVolumeSpecName: "kube-api-access-dwbpg") pod "dcc6f491-d722-48e4-bcb8-8a9de7603786" (UID: "dcc6f491-d722-48e4-bcb8-8a9de7603786"). InnerVolumeSpecName "kube-api-access-dwbpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.156906 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory" (OuterVolumeSpecName: "inventory") pod "dcc6f491-d722-48e4-bcb8-8a9de7603786" (UID: "dcc6f491-d722-48e4-bcb8-8a9de7603786"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.159833 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dcc6f491-d722-48e4-bcb8-8a9de7603786" (UID: "dcc6f491-d722-48e4-bcb8-8a9de7603786"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.229065 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.229108 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") on node \"crc\" DevicePath \"\""
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.229124 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.539472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerDied","Data":"b30b161b2c886673222efbf4812da71581156b85df480b4917abb89388fa0ed3"}
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.539517 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30b161b2c886673222efbf4812da71581156b85df480b4917abb89388fa0ed3"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.539524 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.679225 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"]
Jan 30 14:19:49 crc kubenswrapper[4793]: E0130 14:19:49.679717 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.679742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.679925 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.680570 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.685658 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.685908 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.686200 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.689801 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"]
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.690036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.840604 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.840672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.840803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.942328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.942383 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.942421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.947683 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.948632 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.964425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.007176 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.516236 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"]
Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.554178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerStarted","Data":"198e531d99fd0bd9e1dbdbead68ffefc142e56214e16f99f17371a7795b85dcf"}
Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.616698 4793 scope.go:117] "RemoveContainer" containerID="90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269"
Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.662713 4793 scope.go:117] "RemoveContainer" containerID="c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37"
Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.708986 4793 scope.go:117] "RemoveContainer" containerID="d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920"
Jan 30 14:19:51 crc kubenswrapper[4793]: I0130 14:19:51.573806 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerStarted","Data":"caeb3293818ec051ac12e0602b0d244314fd25439754a9c03c0a1727737001ef"}
Jan 30 14:19:51 crc kubenswrapper[4793]: I0130 14:19:51.600668 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" podStartSLOduration=2.109043182 podStartE2EDuration="2.600647775s" podCreationTimestamp="2026-01-30 14:19:49 +0000 UTC" firstStartedPulling="2026-01-30 14:19:50.524677089 +0000 UTC m=+2201.226025580" lastFinishedPulling="2026-01-30 14:19:51.016281682 +0000 UTC m=+2201.717630173" observedRunningTime="2026-01-30 14:19:51.5926418 +0000 UTC m=+2202.293990291" watchObservedRunningTime="2026-01-30 14:19:51.600647775 +0000 UTC m=+2202.301996266"
Jan 30 14:20:22 crc kubenswrapper[4793]: I0130 14:20:22.053787 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"]
Jan 30 14:20:22 crc kubenswrapper[4793]: I0130 14:20:22.061796 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"]
Jan 30 14:20:22 crc kubenswrapper[4793]: I0130 14:20:22.410037 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" path="/var/lib/kubelet/pods/33ed75d8-77f2-4c4d-b725-b703b8ce2980/volumes"
Jan 30 14:20:31 crc kubenswrapper[4793]: I0130 14:20:31.969272 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerID="caeb3293818ec051ac12e0602b0d244314fd25439754a9c03c0a1727737001ef" exitCode=0
Jan 30 14:20:31 crc kubenswrapper[4793]: I0130 14:20:31.969354 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerDied","Data":"caeb3293818ec051ac12e0602b0d244314fd25439754a9c03c0a1727737001ef"}
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.427808 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.534964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"1ee9c552-088f-4e61-961e-7062bf6e874b\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") "
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.535492 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"1ee9c552-088f-4e61-961e-7062bf6e874b\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") "
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.535686 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"1ee9c552-088f-4e61-961e-7062bf6e874b\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") "
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.546417 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5" (OuterVolumeSpecName: "kube-api-access-rk8b5") pod "1ee9c552-088f-4e61-961e-7062bf6e874b" (UID: "1ee9c552-088f-4e61-961e-7062bf6e874b"). InnerVolumeSpecName "kube-api-access-rk8b5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.570496 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory" (OuterVolumeSpecName: "inventory") pod "1ee9c552-088f-4e61-961e-7062bf6e874b" (UID: "1ee9c552-088f-4e61-961e-7062bf6e874b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.570795 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ee9c552-088f-4e61-961e-7062bf6e874b" (UID: "1ee9c552-088f-4e61-961e-7062bf6e874b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.639244 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") on node \"crc\" DevicePath \"\""
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.639283 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.639293 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.987498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerDied","Data":"198e531d99fd0bd9e1dbdbead68ffefc142e56214e16f99f17371a7795b85dcf"}
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.987534 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="198e531d99fd0bd9e1dbdbead68ffefc142e56214e16f99f17371a7795b85dcf"
Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.987582 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.076977 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"]
Jan 30 14:20:34 crc kubenswrapper[4793]: E0130 14:20:34.077393 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.077420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.077673 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.078363 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.081291 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.081493 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.081702 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.082447 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.095645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"]
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.249901 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.250036 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.250260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.352535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.353000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.353252 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.357493 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.361750 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.380489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.395855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:20:35 crc kubenswrapper[4793]: I0130 14:20:35.129815 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"]
Jan 30 14:20:36 crc kubenswrapper[4793]: I0130 14:20:36.003794 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerStarted","Data":"a9015e79c329eb72d41d603b294a22ae5d93178d8d2d64cf54528b6f45b377bf"}
Jan 30 14:20:36 crc kubenswrapper[4793]: I0130 14:20:36.004159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerStarted","Data":"fad95305628b0bb9ff4fbb99102a672ed83873978699983c18378fffedce3842"}
Jan 30 14:20:36 crc kubenswrapper[4793]: I0130 14:20:36.028348 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" podStartSLOduration=1.648952768 podStartE2EDuration="2.028321338s" podCreationTimestamp="2026-01-30 14:20:34 +0000 UTC" firstStartedPulling="2026-01-30 14:20:35.141240431 +0000 UTC m=+2245.842588922" lastFinishedPulling="2026-01-30 14:20:35.520609001 +0000 UTC m=+2246.221957492" observedRunningTime="2026-01-30 14:20:36.015925425 +0000 UTC m=+2246.717273926" watchObservedRunningTime="2026-01-30 14:20:36.028321338 +0000 UTC m=+2246.729669859"
Jan 30 14:20:42 crc kubenswrapper[4793]: I0130 14:20:42.413934 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:20:42 crc kubenswrapper[4793]: I0130 14:20:42.414505 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:20:50 crc kubenswrapper[4793]: I0130 14:20:50.810628 4793 scope.go:117] "RemoveContainer" containerID="596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80"
Jan 30 14:21:12 crc kubenswrapper[4793]: I0130 14:21:12.413301 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:21:12 crc kubenswrapper[4793]: I0130 14:21:12.414269 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.691299 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.693736 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.694951 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.695293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.695417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.706551 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.799799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.834066 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:19 crc kubenswrapper[4793]: I0130 14:21:19.054068 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:19 crc kubenswrapper[4793]: I0130 14:21:19.606462 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:20 crc kubenswrapper[4793]: I0130 14:21:20.357426 4793 generic.go:334] "Generic (PLEG): container finished" podID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc" exitCode=0
Jan 30 14:21:20 crc kubenswrapper[4793]: I0130 14:21:20.357487 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"}
Jan 30 14:21:20 crc kubenswrapper[4793]: I0130 14:21:20.357718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerStarted","Data":"071505cdc6018a0a16ae65f42adeffb4b74a81940f0091be45398cfd1a17cab6"}
Jan 30 14:21:22 crc kubenswrapper[4793]: I0130 14:21:22.375591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerStarted","Data":"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"}
Jan 30 14:21:25 crc kubenswrapper[4793]: I0130 14:21:25.401395 4793 generic.go:334] "Generic (PLEG): container finished" podID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65" exitCode=0
Jan 30 14:21:25 crc kubenswrapper[4793]: I0130 14:21:25.401480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"}
Jan 30 14:21:25 crc kubenswrapper[4793]: I0130 14:21:25.404444 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 14:21:26 crc kubenswrapper[4793]: I0130 14:21:26.412947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerStarted","Data":"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"}
Jan 30 14:21:26 crc kubenswrapper[4793]: I0130 14:21:26.433684 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dr4h" podStartSLOduration=2.929897624 podStartE2EDuration="8.433664339s" podCreationTimestamp="2026-01-30 14:21:18 +0000 UTC" firstStartedPulling="2026-01-30 14:21:20.359235545 +0000 UTC m=+2291.060584036" lastFinishedPulling="2026-01-30 14:21:25.86300227 +0000 UTC m=+2296.564350751" observedRunningTime="2026-01-30 14:21:26.428138984 +0000 UTC m=+2297.129487485" watchObservedRunningTime="2026-01-30 14:21:26.433664339 +0000 UTC m=+2297.135012830"
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.054505 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.054844 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.107938 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.448039 4793 generic.go:334] "Generic (PLEG): container finished" podID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerID="a9015e79c329eb72d41d603b294a22ae5d93178d8d2d64cf54528b6f45b377bf" exitCode=0
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.448102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerDied","Data":"a9015e79c329eb72d41d603b294a22ae5d93178d8d2d64cf54528b6f45b377bf"}
Jan 30 14:21:30 crc kubenswrapper[4793]: I0130 14:21:30.964418 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.123521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"44f4e8fd-4511-4670-944a-e37dfc6238c8\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") "
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.123895 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"44f4e8fd-4511-4670-944a-e37dfc6238c8\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") "
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.124022 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"44f4e8fd-4511-4670-944a-e37dfc6238c8\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") "
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.130171 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d" (OuterVolumeSpecName: "kube-api-access-kkb6d") pod "44f4e8fd-4511-4670-944a-e37dfc6238c8" (UID: "44f4e8fd-4511-4670-944a-e37dfc6238c8"). InnerVolumeSpecName "kube-api-access-kkb6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.151699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "44f4e8fd-4511-4670-944a-e37dfc6238c8" (UID: "44f4e8fd-4511-4670-944a-e37dfc6238c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.164884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory" (OuterVolumeSpecName: "inventory") pod "44f4e8fd-4511-4670-944a-e37dfc6238c8" (UID: "44f4e8fd-4511-4670-944a-e37dfc6238c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.226846 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.226899 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.226916 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.463739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerDied","Data":"fad95305628b0bb9ff4fbb99102a672ed83873978699983c18378fffedce3842"}
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.463779 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad95305628b0bb9ff4fbb99102a672ed83873978699983c18378fffedce3842"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.463864 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.584259 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nlncv"]
Jan 30 14:21:31 crc kubenswrapper[4793]: E0130 14:21:31.588487 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.588599 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.589003 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.590003 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.600245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.600610 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.600695 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.601172 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.616471 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nlncv"]
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.736302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.736386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.736449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.838001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.838089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.838119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.853205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.853476 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.854663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.917126 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:32 crc kubenswrapper[4793]: I0130 14:21:32.438530 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nlncv"]
Jan 30 14:21:32 crc kubenswrapper[4793]: I0130 14:21:32.474029 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerStarted","Data":"07909ff107f4055891d6e17429bccfc51538043329feda79f63c9ffa07efd7fc"}
Jan 30 14:21:33 crc kubenswrapper[4793]: I0130 14:21:33.486328 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerStarted","Data":"cb9a5c92d49ff68631aafe317707ea0d2062de92795fb0e86959969982b5b945"}
Jan 30 14:21:33 crc kubenswrapper[4793]: I0130 14:21:33.510003 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" podStartSLOduration=1.857994865 podStartE2EDuration="2.509976596s" podCreationTimestamp="2026-01-30 14:21:31 +0000 UTC" firstStartedPulling="2026-01-30 14:21:32.445329455 +0000 UTC m=+2303.146677956" lastFinishedPulling="2026-01-30 14:21:33.097311196 +0000 UTC m=+2303.798659687" observedRunningTime="2026-01-30 14:21:33.503257042 +0000 UTC m=+2304.204605533" watchObservedRunningTime="2026-01-30 14:21:33.509976596 +0000 UTC m=+2304.211325087"
Jan 30 14:21:39 crc kubenswrapper[4793]: I0130 14:21:39.106983 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:39 crc kubenswrapper[4793]: I0130 14:21:39.823935 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:39 crc kubenswrapper[4793]: I0130 14:21:39.824487 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7dr4h" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server" containerID="cri-o://7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" gracePeriod=2
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.480435 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554149 4793 generic.go:334] "Generic (PLEG): container finished" podID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" exitCode=0
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"}
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"071505cdc6018a0a16ae65f42adeffb4b74a81940f0091be45398cfd1a17cab6"}
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554226 4793 scope.go:117] "RemoveContainer" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554349 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.579147 4793 scope.go:117] "RemoveContainer" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.604255 4793 scope.go:117] "RemoveContainer" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.621147 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"abef0532-bda8-460d-80b9-c4e44ce7f68e\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") "
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.621297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"abef0532-bda8-460d-80b9-c4e44ce7f68e\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") "
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.621447 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"abef0532-bda8-460d-80b9-c4e44ce7f68e\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") "
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.622444 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities" (OuterVolumeSpecName: "utilities") pod "abef0532-bda8-460d-80b9-c4e44ce7f68e" (UID: "abef0532-bda8-460d-80b9-c4e44ce7f68e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.627871 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4" (OuterVolumeSpecName: "kube-api-access-tcvg4") pod "abef0532-bda8-460d-80b9-c4e44ce7f68e" (UID: "abef0532-bda8-460d-80b9-c4e44ce7f68e"). InnerVolumeSpecName "kube-api-access-tcvg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.677854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abef0532-bda8-460d-80b9-c4e44ce7f68e" (UID: "abef0532-bda8-460d-80b9-c4e44ce7f68e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.692983 4793 scope.go:117] "RemoveContainer" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"
Jan 30 14:21:40 crc kubenswrapper[4793]: E0130 14:21:40.693754 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395\": container with ID starting with 7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395 not found: ID does not exist" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.693784 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"} err="failed to get container status \"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395\": rpc error: code = NotFound desc = could not find container \"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395\": container with ID starting with 7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395 not found: ID does not exist"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.693804 4793 scope.go:117] "RemoveContainer" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"
Jan 30 14:21:40 crc kubenswrapper[4793]: E0130 14:21:40.694098 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65\": container with ID starting with dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65 not found: ID does not exist" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.694130 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"} err="failed to get container status \"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65\": rpc error: code = NotFound desc = could not find container \"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65\": container with ID starting with dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65 not found: ID does not exist"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.694144 4793 scope.go:117] "RemoveContainer" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"
Jan 30 14:21:40 crc kubenswrapper[4793]: E0130 14:21:40.694627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc\": container with ID starting with 220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc not found: ID does not exist" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.694694 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"} err="failed to get container status \"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc\": rpc error: code = NotFound desc = could not find container \"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc\": container with ID starting with 220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc not found: ID does not exist"
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.723234 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.723446 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.723507 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.888313 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.896551 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:41 crc kubenswrapper[4793]: I0130 14:21:41.563623 4793 generic.go:334] "Generic (PLEG): container finished" podID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerID="cb9a5c92d49ff68631aafe317707ea0d2062de92795fb0e86959969982b5b945" exitCode=0
Jan 30 14:21:41 crc kubenswrapper[4793]: I0130 14:21:41.563926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerDied","Data":"cb9a5c92d49ff68631aafe317707ea0d2062de92795fb0e86959969982b5b945"}
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.413402 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.413458 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.414521 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" path="/var/lib/kubelet/pods/abef0532-bda8-460d-80b9-c4e44ce7f68e/volumes"
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.416271 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.417853 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.418004 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" gracePeriod=600
Jan 30 14:21:42 crc kubenswrapper[4793]: E0130 14:21:42.548144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584065 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" exitCode=0
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"}
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584309 4793 scope.go:117] "RemoveContainer" containerID="c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"
Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584916 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
Jan 30 14:21:42 crc kubenswrapper[4793]: E0130 14:21:42.585231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.069207 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.168892 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"3cad1dbc-effe-48d8-af45-df0a45e16783\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") "
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.168958 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"3cad1dbc-effe-48d8-af45-df0a45e16783\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") "
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.169077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"3cad1dbc-effe-48d8-af45-df0a45e16783\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") "
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.174978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z" (OuterVolumeSpecName: "kube-api-access-2s29z") pod "3cad1dbc-effe-48d8-af45-df0a45e16783" (UID: "3cad1dbc-effe-48d8-af45-df0a45e16783"). InnerVolumeSpecName "kube-api-access-2s29z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.200874 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3cad1dbc-effe-48d8-af45-df0a45e16783" (UID: "3cad1dbc-effe-48d8-af45-df0a45e16783"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.208326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3cad1dbc-effe-48d8-af45-df0a45e16783" (UID: "3cad1dbc-effe-48d8-af45-df0a45e16783"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.270636 4793 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.270693 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.270710 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.595879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerDied","Data":"07909ff107f4055891d6e17429bccfc51538043329feda79f63c9ffa07efd7fc"}
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.595943 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07909ff107f4055891d6e17429bccfc51538043329feda79f63c9ffa07efd7fc"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.596002 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.677944 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"]
Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678371 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678389 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server"
Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678414 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-content"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-content"
Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678435 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678443 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678467 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-utilities"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678475 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-utilities"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678671 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678690 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.679393 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.689189 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.689367 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.689427 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.694872 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.705521 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"]
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.779553 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.779620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.779648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.881023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.881115 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.881256 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.887197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.892928 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.900587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.996119 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:44 crc kubenswrapper[4793]: I0130 14:21:44.716008 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"]
Jan 30 14:21:45 crc kubenswrapper[4793]: I0130 14:21:45.628247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerStarted","Data":"d4a75b71a6f08d7e1ae63d9f7e8be9b4c3fd94122dc13efb955e3a3da657f8ea"}
Jan 30 14:21:45 crc kubenswrapper[4793]: I0130 14:21:45.628610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerStarted","Data":"5056c89f893c22f6d895f2db21ec550d28feaa74d141a03d37334d3db4ad6603"}
Jan 30 14:21:45 crc kubenswrapper[4793]: I0130 14:21:45.649704 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" podStartSLOduration=2.204764276 podStartE2EDuration="2.649683473s" podCreationTimestamp="2026-01-30 14:21:43 +0000 UTC" firstStartedPulling="2026-01-30 14:21:44.726785143 +0000 UTC m=+2315.428133634" lastFinishedPulling="2026-01-30 14:21:45.17170434 +0000 UTC m=+2315.873052831" observedRunningTime="2026-01-30 14:21:45.648574695 +0000 UTC m=+2316.349923196" watchObservedRunningTime="2026-01-30 14:21:45.649683473 +0000 UTC m=+2316.351031964"
Jan 30 14:21:53 crc kubenswrapper[4793]: I0130 14:21:53.753078 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerDied","Data":"d4a75b71a6f08d7e1ae63d9f7e8be9b4c3fd94122dc13efb955e3a3da657f8ea"}
Jan 30 14:21:53 crc kubenswrapper[4793]: I0130 14:21:53.753038 4793 generic.go:334] "Generic (PLEG): container finished" podID="7915ec77-ca16-4f23-a367-42b525c80284" containerID="d4a75b71a6f08d7e1ae63d9f7e8be9b4c3fd94122dc13efb955e3a3da657f8ea" exitCode=0
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.203945 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.266253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"7915ec77-ca16-4f23-a367-42b525c80284\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") "
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.266335 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"7915ec77-ca16-4f23-a367-42b525c80284\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") "
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.266371 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"7915ec77-ca16-4f23-a367-42b525c80284\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") "
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.272203 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b" (OuterVolumeSpecName: "kube-api-access-tb45b") pod "7915ec77-ca16-4f23-a367-42b525c80284" (UID: "7915ec77-ca16-4f23-a367-42b525c80284"). InnerVolumeSpecName "kube-api-access-tb45b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.291641 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7915ec77-ca16-4f23-a367-42b525c80284" (UID: "7915ec77-ca16-4f23-a367-42b525c80284"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.298949 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory" (OuterVolumeSpecName: "inventory") pod "7915ec77-ca16-4f23-a367-42b525c80284" (UID: "7915ec77-ca16-4f23-a367-42b525c80284"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.375267 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.375316 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.375332 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.780298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerDied","Data":"5056c89f893c22f6d895f2db21ec550d28feaa74d141a03d37334d3db4ad6603"}
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.780363 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5056c89f893c22f6d895f2db21ec550d28feaa74d141a03d37334d3db4ad6603"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.780470 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.875925 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"]
Jan 30 14:21:55 crc kubenswrapper[4793]: E0130 14:21:55.876298 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7915ec77-ca16-4f23-a367-42b525c80284" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.876314 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7915ec77-ca16-4f23-a367-42b525c80284" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.876490 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7915ec77-ca16-4f23-a367-42b525c80284" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.877115 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.881950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.895535 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.895734 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.895884 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.928582 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"]
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.998064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.998164 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.998425 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.100267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.100365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.100456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.106904 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.107112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.147399 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.258363 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.787646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"]
Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.398323 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
Jan 30 14:21:57 crc kubenswrapper[4793]: E0130 14:21:57.398836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.796424 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerStarted","Data":"fea5e63393f75f4b613c43ceaa8d48b3e7349e45486c106589b512deedfb7172"}
Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.796480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerStarted","Data":"bd05c803b7c5cfaa753e46947ba4a87a5c66eb51717cb996ce4f28515a85e28e"}
Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.830974 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" podStartSLOduration=2.396983111 podStartE2EDuration="2.830953881s" podCreationTimestamp="2026-01-30 14:21:55 +0000 UTC" firstStartedPulling="2026-01-30 14:21:56.78893083 +0000 UTC m=+2327.490279321" lastFinishedPulling="2026-01-30 14:21:57.22290158 +0000 UTC m=+2327.924250091" observedRunningTime="2026-01-30 14:21:57.828417349 +0000 UTC m=+2328.529765880" watchObservedRunningTime="2026-01-30 14:21:57.830953881 +0000 UTC m=+2328.532302372"
Jan 30 14:22:07 crc kubenswrapper[4793]: I0130 14:22:07.884810 4793 generic.go:334] "Generic (PLEG): container finished" podID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerID="fea5e63393f75f4b613c43ceaa8d48b3e7349e45486c106589b512deedfb7172" exitCode=0
Jan 30 14:22:07 crc kubenswrapper[4793]: I0130 14:22:07.884911 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerDied","Data":"fea5e63393f75f4b613c43ceaa8d48b3e7349e45486c106589b512deedfb7172"}
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.297409 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.385680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"0538b501-a861-4302-b26e-f5cfb17ed62a\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") "
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.386107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"0538b501-a861-4302-b26e-f5cfb17ed62a\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") "
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.386410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"0538b501-a861-4302-b26e-f5cfb17ed62a\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") "
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.391853 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn" (OuterVolumeSpecName: "kube-api-access-gp2cn") pod "0538b501-a861-4302-b26e-f5cfb17ed62a" (UID: "0538b501-a861-4302-b26e-f5cfb17ed62a"). InnerVolumeSpecName "kube-api-access-gp2cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.398892 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
Jan 30 14:22:09 crc kubenswrapper[4793]: E0130 14:22:09.399301 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.412955 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0538b501-a861-4302-b26e-f5cfb17ed62a" (UID: "0538b501-a861-4302-b26e-f5cfb17ed62a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.413492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory" (OuterVolumeSpecName: "inventory") pod "0538b501-a861-4302-b26e-f5cfb17ed62a" (UID: "0538b501-a861-4302-b26e-f5cfb17ed62a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.490753 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") on node \"crc\" DevicePath \"\""
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.490781 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.490791 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.902262 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerDied","Data":"bd05c803b7c5cfaa753e46947ba4a87a5c66eb51717cb996ce4f28515a85e28e"}
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.902579 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd05c803b7c5cfaa753e46947ba4a87a5c66eb51717cb996ce4f28515a85e28e"
Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.902320 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.011175 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"]
Jan 30 14:22:10 crc kubenswrapper[4793]: E0130 14:22:10.011886 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.011995 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.012457 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.013348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019068 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019341 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019420 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019528 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019889 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.020179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.023772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"]
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"
Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165737 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165981 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166107 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166262 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166356 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166474 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166524 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166568 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.267695 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.267946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268065 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268219 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268309 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268380 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268676 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268747 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.272806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.273151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.274417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.275831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.277087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.277857 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.279175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.279659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.281899 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.282088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.282881 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.283113 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.285658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.289439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.482517 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.491094 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.035469 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"] Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.537234 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.922426 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerStarted","Data":"89c8d9f7344ea357868d402178be5ed38d7a7f8c40ac7b30aa3adfa7292331e3"} Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.922802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerStarted","Data":"0da6000fbf46068f349a91eb8f524a9b8122da198bebf7d03c6e4893fda58193"} Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.958953 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" podStartSLOduration=2.467887825 podStartE2EDuration="2.958933876s" podCreationTimestamp="2026-01-30 14:22:09 +0000 UTC" firstStartedPulling="2026-01-30 14:22:11.043696303 +0000 UTC m=+2341.745044794" lastFinishedPulling="2026-01-30 14:22:11.534742354 +0000 UTC m=+2342.236090845" observedRunningTime="2026-01-30 14:22:11.94594558 +0000 UTC m=+2342.647294121" watchObservedRunningTime="2026-01-30 14:22:11.958933876 +0000 UTC m=+2342.660282387" Jan 30 14:22:24 crc kubenswrapper[4793]: I0130 14:22:24.399261 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:24 crc kubenswrapper[4793]: E0130 14:22:24.402062 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:36 crc kubenswrapper[4793]: I0130 14:22:36.398382 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:36 crc kubenswrapper[4793]: E0130 14:22:36.399362 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:50 crc kubenswrapper[4793]: I0130 14:22:50.398631 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:50 crc kubenswrapper[4793]: E0130 14:22:50.399765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:51 crc kubenswrapper[4793]: I0130 14:22:51.249007 4793 generic.go:334] "Generic (PLEG): container finished" podID="ae4f8964-b104-43bb-8356-bb53a9635527" containerID="89c8d9f7344ea357868d402178be5ed38d7a7f8c40ac7b30aa3adfa7292331e3" exitCode=0 Jan 30 14:22:51 crc kubenswrapper[4793]: I0130 14:22:51.249200 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerDied","Data":"89c8d9f7344ea357868d402178be5ed38d7a7f8c40ac7b30aa3adfa7292331e3"} Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.654842 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693790 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693812 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694022 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694066 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694104 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694195 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694243 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.705007 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t" (OuterVolumeSpecName: "kube-api-access-d2t4t") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "kube-api-access-d2t4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.705930 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.711397 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.712853 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.713909 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.714847 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.717377 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.717822 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.718349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.718474 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.722943 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.733307 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.739455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.741598 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory" (OuterVolumeSpecName: "inventory") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.796685 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.796894 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797010 4793 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797130 4793 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797224 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797313 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797395 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797473 4793 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797558 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797641 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797730 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797807 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797879 4793 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797958 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.272787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerDied","Data":"0da6000fbf46068f349a91eb8f524a9b8122da198bebf7d03c6e4893fda58193"} Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.273151 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0da6000fbf46068f349a91eb8f524a9b8122da198bebf7d03c6e4893fda58193" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.272887 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.394847 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7"] Jan 30 14:22:53 crc kubenswrapper[4793]: E0130 14:22:53.397195 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4f8964-b104-43bb-8356-bb53a9635527" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.397242 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4f8964-b104-43bb-8356-bb53a9635527" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.397813 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4f8964-b104-43bb-8356-bb53a9635527" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.398700 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.401732 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.401760 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.402105 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.403527 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7"] Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.405632 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.405814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513254 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513382 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615670 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615739 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615835 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.616589 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.620318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.621692 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.622125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.666740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.714315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:54 crc kubenswrapper[4793]: I0130 14:22:54.295310 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7"] Jan 30 14:22:55 crc kubenswrapper[4793]: I0130 14:22:55.307314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerStarted","Data":"219da4f20d3a98a397a408028d5a88362d19486413272faf80a42261aca02884"} Jan 30 14:22:55 crc kubenswrapper[4793]: I0130 14:22:55.307965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerStarted","Data":"062659d165e41463074a05fd5501629453876dd6ce5b9a5b154ed6ee90613d8f"} Jan 30 14:23:03 crc kubenswrapper[4793]: I0130 14:23:03.398483 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:03 crc kubenswrapper[4793]: E0130 14:23:03.399432 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:14 crc kubenswrapper[4793]: I0130 14:23:14.398396 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:14 crc kubenswrapper[4793]: E0130 14:23:14.399494 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:25 crc kubenswrapper[4793]: I0130 14:23:25.398649 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:25 crc kubenswrapper[4793]: E0130 14:23:25.399553 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:37 crc kubenswrapper[4793]: I0130 14:23:37.399840 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:37 crc kubenswrapper[4793]: E0130 14:23:37.401397 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:49 crc kubenswrapper[4793]: I0130 14:23:49.398948 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:49 crc kubenswrapper[4793]: E0130 14:23:49.400160 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:50 crc kubenswrapper[4793]: I0130 14:23:50.948711 4793 scope.go:117] "RemoveContainer" containerID="87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949" Jan 30 14:23:50 crc kubenswrapper[4793]: I0130 14:23:50.979640 4793 scope.go:117] "RemoveContainer" containerID="97e00f686b282180edd4c6895080d4ff4fea6b3dd37684dbd36be6025541ffd0" Jan 30 14:23:51 crc kubenswrapper[4793]: I0130 14:23:51.063886 4793 scope.go:117] "RemoveContainer" containerID="085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c" Jan 30 14:24:00 crc kubenswrapper[4793]: I0130 14:24:00.410970 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:24:00 crc kubenswrapper[4793]: E0130 14:24:00.411457 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:24:02 crc kubenswrapper[4793]: I0130 14:24:02.898736 4793 generic.go:334] "Generic (PLEG): container finished" podID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerID="219da4f20d3a98a397a408028d5a88362d19486413272faf80a42261aca02884" exitCode=0 Jan 30 14:24:02 crc kubenswrapper[4793]: I0130 14:24:02.898830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerDied","Data":"219da4f20d3a98a397a408028d5a88362d19486413272faf80a42261aca02884"} Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.414489 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590434 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590559 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590806 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590887 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.597288 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv" (OuterVolumeSpecName: "kube-api-access-7rrtv") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "kube-api-access-7rrtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.599214 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.613505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.616266 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.620428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory" (OuterVolumeSpecName: "inventory") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693432 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693481 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693492 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693503 4793 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693514 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.921646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerDied","Data":"062659d165e41463074a05fd5501629453876dd6ce5b9a5b154ed6ee90613d8f"} Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.921872 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062659d165e41463074a05fd5501629453876dd6ce5b9a5b154ed6ee90613d8f" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.922384 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.094377 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"] Jan 30 14:24:05 crc kubenswrapper[4793]: E0130 14:24:05.094805 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.094823 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.095002 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.095749 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.098545 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099538 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099784 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099962 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.103965 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.108363 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"] Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.202970 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203267 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203305 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.306740 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.306900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.306970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.307036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.307250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.307507 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.311437 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.311653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.312100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.313109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.314288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 
crc kubenswrapper[4793]: I0130 14:24:05.328196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.438180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:06 crc kubenswrapper[4793]: I0130 14:24:06.007821 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"] Jan 30 14:24:06 crc kubenswrapper[4793]: I0130 14:24:06.954397 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerStarted","Data":"e55213f2fced3737de3fb3ff4602498a86b686ff3ab59fdf6509dddac24327d6"} Jan 30 14:24:07 crc kubenswrapper[4793]: I0130 14:24:07.966467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerStarted","Data":"5885befe35927759b0d2ced1a2a1467580181cfae34c28239ea999f58e29a334"} Jan 30 14:24:08 crc kubenswrapper[4793]: I0130 14:24:08.001283 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" podStartSLOduration=2.210273821 podStartE2EDuration="3.001251148s" podCreationTimestamp="2026-01-30 14:24:05 +0000 UTC" firstStartedPulling="2026-01-30 14:24:06.00563902 +0000 UTC m=+2456.706987521" lastFinishedPulling="2026-01-30 14:24:06.796616337 +0000 UTC m=+2457.497964848" observedRunningTime="2026-01-30 14:24:07.987743506 +0000 UTC m=+2458.689092007" watchObservedRunningTime="2026-01-30 14:24:08.001251148 +0000 UTC m=+2458.702599659" Jan 30 14:24:12 crc kubenswrapper[4793]: I0130 14:24:12.398472 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:24:12 crc kubenswrapper[4793]: E0130 14:24:12.400459 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:24:27 crc kubenswrapper[4793]: I0130 14:24:27.398999 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:24:27 crc kubenswrapper[4793]: E0130 14:24:27.399726 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:24:39 
crc kubenswrapper[4793]: I0130 14:24:39.398579 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:24:39 crc kubenswrapper[4793]: E0130 14:24:39.399502 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:24:52 crc kubenswrapper[4793]: I0130 14:24:52.398865 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:24:52 crc kubenswrapper[4793]: E0130 14:24:52.399663 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:24:57 crc kubenswrapper[4793]: I0130 14:24:57.409075 4793 generic.go:334] "Generic (PLEG): container finished" podID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerID="5885befe35927759b0d2ced1a2a1467580181cfae34c28239ea999f58e29a334" exitCode=0 Jan 30 14:24:57 crc kubenswrapper[4793]: I0130 14:24:57.409167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerDied","Data":"5885befe35927759b0d2ced1a2a1467580181cfae34c28239ea999f58e29a334"} Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.819410 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.918717 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919847 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919923 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919963 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.924975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw" (OuterVolumeSpecName: "kube-api-access-hkgcw") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "kube-api-access-hkgcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.930315 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.950834 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.953233 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory" (OuterVolumeSpecName: "inventory") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.954551 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.958902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022809 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022847 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022866 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022895 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022909 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.428462 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerDied","Data":"e55213f2fced3737de3fb3ff4602498a86b686ff3ab59fdf6509dddac24327d6"} Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.428559 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e55213f2fced3737de3fb3ff4602498a86b686ff3ab59fdf6509dddac24327d6" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.428581 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.553625 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"] Jan 30 14:24:59 crc kubenswrapper[4793]: E0130 14:24:59.554162 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.554188 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.554419 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.555242 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.558759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.559169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.559454 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.559578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.564770 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.571489 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"] Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634830 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634921 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.742498 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: 
\"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.743103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.745818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.752618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.756529 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.890168 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:25:00 crc kubenswrapper[4793]: I0130 14:25:00.510962 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"] Jan 30 14:25:01 crc kubenswrapper[4793]: I0130 14:25:01.458515 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerStarted","Data":"61f0898c6128b3026d78cf3afa09780d7e497bed3bbd093ccb7f3ad49150e91f"} Jan 30 14:25:01 crc kubenswrapper[4793]: I0130 14:25:01.458563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerStarted","Data":"0bf138472118ab1f44e112f736372179f055ce03bbf973e33b87d18006a030f8"} Jan 30 14:25:01 crc kubenswrapper[4793]: I0130 14:25:01.474308 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" podStartSLOduration=1.974009822 podStartE2EDuration="2.474292974s" podCreationTimestamp="2026-01-30 14:24:59 +0000 UTC" firstStartedPulling="2026-01-30 14:25:00.522896189 +0000 UTC m=+2511.224244680" lastFinishedPulling="2026-01-30 14:25:01.023179311 +0000 UTC m=+2511.724527832" observedRunningTime="2026-01-30 14:25:01.473536466 +0000 UTC m=+2512.174884957" watchObservedRunningTime="2026-01-30 14:25:01.474292974 +0000 UTC m=+2512.175641465" Jan 30 14:25:06 crc kubenswrapper[4793]: I0130 14:25:06.401127 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:06 crc kubenswrapper[4793]: E0130 14:25:06.402670 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:25:19 crc kubenswrapper[4793]: I0130 14:25:19.399460 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:19 crc kubenswrapper[4793]: E0130 14:25:19.400338 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:25:33 crc kubenswrapper[4793]: I0130 14:25:33.398448 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:33 crc kubenswrapper[4793]: E0130 14:25:33.399361 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:25:48 crc kubenswrapper[4793]: I0130 14:25:48.399317 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:48 crc kubenswrapper[4793]: E0130 14:25:48.400207 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:01 crc kubenswrapper[4793]: I0130 14:26:01.398116 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:01 crc kubenswrapper[4793]: E0130 14:26:01.398822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:13 crc kubenswrapper[4793]: I0130 14:26:13.399641 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:13 crc kubenswrapper[4793]: E0130 14:26:13.401036 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:27 crc kubenswrapper[4793]: I0130 14:26:27.398100 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:27 crc kubenswrapper[4793]: E0130 14:26:27.399989 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:39 crc kubenswrapper[4793]: I0130 14:26:39.399154 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:39 crc kubenswrapper[4793]: E0130 14:26:39.399926 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:50 crc kubenswrapper[4793]: I0130 14:26:50.406830 4793 
scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:51 crc kubenswrapper[4793]: I0130 14:26:51.446922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57"} Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.066710 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.070525 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.084221 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.172686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.172812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.172863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.274239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.274314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.274389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.275027 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.275031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.296185 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.386769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.068065 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.808019 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ac188e0-8883-4288-8574-a8388bea78d2" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59" exitCode=0 Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.808783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"} Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.808949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerStarted","Data":"2e6978349422c2c067899ffd8f2d73652f6c4e68208717f0207feab345d75662"} Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.810649 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:27:31 crc kubenswrapper[4793]: I0130 14:27:31.819613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerStarted","Data":"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"} Jan 30 14:27:35 crc kubenswrapper[4793]: I0130 14:27:35.872002 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ac188e0-8883-4288-8574-a8388bea78d2" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa" exitCode=0 Jan 30 14:27:35 crc kubenswrapper[4793]: I0130 14:27:35.872793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"} Jan 30 14:27:36 crc kubenswrapper[4793]: I0130 14:27:36.884859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" 
event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerStarted","Data":"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"} Jan 30 14:27:36 crc kubenswrapper[4793]: I0130 14:27:36.919278 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j9zsb" podStartSLOduration=2.349947378 podStartE2EDuration="7.919237533s" podCreationTimestamp="2026-01-30 14:27:29 +0000 UTC" firstStartedPulling="2026-01-30 14:27:30.810271427 +0000 UTC m=+2661.511619928" lastFinishedPulling="2026-01-30 14:27:36.379561552 +0000 UTC m=+2667.080910083" observedRunningTime="2026-01-30 14:27:36.908813047 +0000 UTC m=+2667.610161558" watchObservedRunningTime="2026-01-30 14:27:36.919237533 +0000 UTC m=+2667.620586034" Jan 30 14:27:39 crc kubenswrapper[4793]: I0130 14:27:39.387397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:39 crc kubenswrapper[4793]: I0130 14:27:39.387841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:39 crc kubenswrapper[4793]: I0130 14:27:39.444558 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:49 crc kubenswrapper[4793]: I0130 14:27:49.436717 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:49 crc kubenswrapper[4793]: I0130 14:27:49.500164 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:49 crc kubenswrapper[4793]: I0130 14:27:49.998185 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j9zsb" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" containerID="cri-o://e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" gracePeriod=2 Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.445558 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.453511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"8ac188e0-8883-4288-8574-a8388bea78d2\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.453580 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"8ac188e0-8883-4288-8574-a8388bea78d2\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.453686 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"8ac188e0-8883-4288-8574-a8388bea78d2\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.454416 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities" (OuterVolumeSpecName: "utilities") pod "8ac188e0-8883-4288-8574-a8388bea78d2" (UID: "8ac188e0-8883-4288-8574-a8388bea78d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.468305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw" (OuterVolumeSpecName: "kube-api-access-qfkbw") pod "8ac188e0-8883-4288-8574-a8388bea78d2" (UID: "8ac188e0-8883-4288-8574-a8388bea78d2"). InnerVolumeSpecName "kube-api-access-qfkbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.517963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ac188e0-8883-4288-8574-a8388bea78d2" (UID: "8ac188e0-8883-4288-8574-a8388bea78d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.556622 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.556655 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.556670 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") on node \"crc\" DevicePath \"\"" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013276 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ac188e0-8883-4288-8574-a8388bea78d2" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" exitCode=0 Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"} Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"2e6978349422c2c067899ffd8f2d73652f6c4e68208717f0207feab345d75662"} Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013403 4793 scope.go:117] "RemoveContainer" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013425 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.039844 4793 scope.go:117] "RemoveContainer" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.055560 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"]
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.062510 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"]
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.064730 4793 scope.go:117] "RemoveContainer" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.113935 4793 scope.go:117] "RemoveContainer" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"
Jan 30 14:27:51 crc kubenswrapper[4793]: E0130 14:27:51.114387 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7\": container with ID starting with e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7 not found: ID does not exist" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114429 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"} err="failed to get container status \"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7\": rpc error: code = NotFound desc = could not find container \"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7\": container with ID starting with e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7 not found: ID does not exist"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114457 4793 scope.go:117] "RemoveContainer" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"
Jan 30 14:27:51 crc kubenswrapper[4793]: E0130 14:27:51.114736 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa\": container with ID starting with 5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa not found: ID does not exist" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114771 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"} err="failed to get container status \"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa\": rpc error: code = NotFound desc = could not find container \"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa\": container with ID starting with 5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa not found: ID does not exist"
Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114791 4793 scope.go:117] "RemoveContainer" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"
Jan 30 14:27:51 crc kubenswrapper[4793]: E0130 14:27:51.115223 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": container with ID starting with 3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59 not found: ID does not exist" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"
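The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above are benign: by the time the kubelet garbage-collects the dead pod's containers, CRI-O has already removed them, and a gRPC NotFound is treated as "already gone". A hedged Go sketch of that idempotent-delete pattern (getStatus and remove are hypothetical stand-ins for the CRI calls):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer swallows NotFound: the container is already gone,
    // so the error is logged and deletion is considered complete.
    func removeContainer(id string, getStatus func(string) error, remove func(string) error) error {
    	if err := getStatus(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			fmt.Printf("DeleteContainer returned error containerID=%q err=%v\n", id, err)
    			return nil // already removed; nothing left to do
    		}
    		return err
    	}
    	return remove(id)
    }

    func main() {
    	notFound := status.Error(codes.NotFound, "could not find container")
    	_ = removeContainer("e006994e9b0e...", // truncated ID from the log, for illustration
    		func(string) error { return notFound },
    		func(string) error { return nil })
    }
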
failed" err="rpc error: code = NotFound desc = could not find container \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": container with ID starting with 3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59 not found: ID does not exist" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.115248 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"} err="failed to get container status \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": rpc error: code = NotFound desc = could not find container \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": container with ID starting with 3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59 not found: ID does not exist" Jan 30 14:27:52 crc kubenswrapper[4793]: I0130 14:27:52.408887 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" path="/var/lib/kubelet/pods/8ac188e0-8883-4288-8574-a8388bea78d2/volumes" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725062 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:27:53 crc kubenswrapper[4793]: E0130 14:27:53.725693 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-utilities" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725705 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-utilities" Jan 30 14:27:53 crc kubenswrapper[4793]: E0130 14:27:53.725727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-content" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725733 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-content" Jan 30 14:27:53 crc kubenswrapper[4793]: E0130 14:27:53.725752 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725759 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725990 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.727385 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.751021 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"]
Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.923715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.924099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.924193 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026504 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.027129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.052015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.346414 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.719345 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:27:55 crc kubenswrapper[4793]: I0130 14:27:55.052273 4793 generic.go:334] "Generic (PLEG): container finished" podID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" exitCode=0 Jan 30 14:27:55 crc kubenswrapper[4793]: I0130 14:27:55.052400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0"} Jan 30 14:27:55 crc kubenswrapper[4793]: I0130 14:27:55.052587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerStarted","Data":"c693d9182095ee36e51a7a2bd725bebc76ec6dfb2df0b81b55aa8de3f6cfa553"} Jan 30 14:27:56 crc kubenswrapper[4793]: I0130 14:27:56.062093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerStarted","Data":"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169"} Jan 30 14:28:08 crc kubenswrapper[4793]: I0130 14:28:08.181723 4793 generic.go:334] "Generic (PLEG): container finished" podID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" exitCode=0 Jan 30 14:28:08 crc kubenswrapper[4793]: I0130 14:28:08.181764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169"} Jan 30 14:28:12 crc kubenswrapper[4793]: I0130 14:28:12.222385 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerStarted","Data":"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b"} Jan 30 14:28:12 crc kubenswrapper[4793]: I0130 14:28:12.248128 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tcg6z" podStartSLOduration=2.794861712 podStartE2EDuration="19.248109189s" podCreationTimestamp="2026-01-30 14:27:53 +0000 UTC" firstStartedPulling="2026-01-30 14:27:55.053724415 +0000 UTC m=+2685.755072906" lastFinishedPulling="2026-01-30 14:28:11.506971882 +0000 UTC m=+2702.208320383" observedRunningTime="2026-01-30 14:28:12.246784226 +0000 UTC m=+2702.948132727" watchObservedRunningTime="2026-01-30 14:28:12.248109189 +0000 UTC m=+2702.949457680" Jan 30 14:28:14 crc kubenswrapper[4793]: I0130 14:28:14.346801 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 
14:28:14 crc kubenswrapper[4793]: I0130 14:28:14.347318 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:28:15 crc kubenswrapper[4793]: I0130 14:28:15.388699 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tcg6z" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" probeResult="failure" output=<
Jan 30 14:28:15 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 14:28:15 crc kubenswrapper[4793]: >
Jan 30 14:28:24 crc kubenswrapper[4793]: I0130 14:28:24.412281 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:28:24 crc kubenswrapper[4793]: I0130 14:28:24.480391 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:28:24 crc kubenswrapper[4793]: I0130 14:28:24.928947 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"]
Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.336367 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tcg6z" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" containerID="cri-o://63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" gracePeriod=2
Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.797916 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z"
Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.938523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"2248feb5-b64e-4fbc-8993-7d6e69082932\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") "
Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.938979 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"2248feb5-b64e-4fbc-8993-7d6e69082932\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") "
Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.939094 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"2248feb5-b64e-4fbc-8993-7d6e69082932\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") "
Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.941784 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities" (OuterVolumeSpecName: "utilities") pod "2248feb5-b64e-4fbc-8993-7d6e69082932" (UID: "2248feb5-b64e-4fbc-8993-7d6e69082932"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
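The startup-probe failure earlier in this window (`timeout: failed to connect service ":50051" within 1s`) is a gRPC health check against the registry server's port; the pod only turns ready once the unpacked catalog is being served. A rough client-side equivalent using grpc-go's standard health service (a sketch of what such a probe does, not the actual probe binary):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // Connect to the registry server's gRPC port and query the standard
    // grpc.health.v1 service, giving up after one second (hence the
    // "within 1s" in the logged probe output).
    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()

    	conn, err := grpc.DialContext(ctx, "localhost:50051",
    		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
    	if err != nil {
    		fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
    		return
    	}
    	defer conn.Close()

    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil {
    		fmt.Println("health check failed:", err)
    		return
    	}
    	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog is up
    }
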
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.951364 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7" (OuterVolumeSpecName: "kube-api-access-drbh7") pod "2248feb5-b64e-4fbc-8993-7d6e69082932" (UID: "2248feb5-b64e-4fbc-8993-7d6e69082932"). InnerVolumeSpecName "kube-api-access-drbh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.041325 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.041371 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.064537 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2248feb5-b64e-4fbc-8993-7d6e69082932" (UID: "2248feb5-b64e-4fbc-8993-7d6e69082932"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.143148 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354632 4793 generic.go:334] "Generic (PLEG): container finished" podID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" exitCode=0 Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354698 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b"} Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354723 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354755 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"c693d9182095ee36e51a7a2bd725bebc76ec6dfb2df0b81b55aa8de3f6cfa553"} Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354779 4793 scope.go:117] "RemoveContainer" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.385156 4793 scope.go:117] "RemoveContainer" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.413005 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.430458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.437989 4793 scope.go:117] "RemoveContainer" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.486794 4793 scope.go:117] "RemoveContainer" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" Jan 30 14:28:27 crc kubenswrapper[4793]: E0130 14:28:27.487422 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b\": container with ID starting with 63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b not found: ID does not exist" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.487472 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b"} err="failed to get container status \"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b\": rpc error: code = NotFound desc = could not find container \"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b\": container with ID starting with 63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b not found: ID does not exist" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.487508 4793 scope.go:117] "RemoveContainer" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" Jan 30 14:28:27 crc kubenswrapper[4793]: E0130 14:28:27.489358 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169\": container with ID starting with dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169 not found: ID does not exist" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.489389 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169"} err="failed to get container status \"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169\": rpc error: code = NotFound desc = could not find container 
\"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169\": container with ID starting with dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169 not found: ID does not exist" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.489409 4793 scope.go:117] "RemoveContainer" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" Jan 30 14:28:27 crc kubenswrapper[4793]: E0130 14:28:27.489756 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0\": container with ID starting with 2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0 not found: ID does not exist" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.489787 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0"} err="failed to get container status \"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0\": rpc error: code = NotFound desc = could not find container \"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0\": container with ID starting with 2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0 not found: ID does not exist" Jan 30 14:28:28 crc kubenswrapper[4793]: I0130 14:28:28.410242 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" path="/var/lib/kubelet/pods/2248feb5-b64e-4fbc-8993-7d6e69082932/volumes" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.796469 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:40 crc kubenswrapper[4793]: E0130 14:28:40.797542 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797568 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" Jan 30 14:28:40 crc kubenswrapper[4793]: E0130 14:28:40.797598 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-utilities" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797608 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-utilities" Jan 30 14:28:40 crc kubenswrapper[4793]: E0130 14:28:40.797638 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-content" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797647 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-content" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797930 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.799845 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.819177 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.919232 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.919546 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.919608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.021619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.021765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.021789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.022311 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.022387 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.041728 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.124195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.641338 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:42 crc kubenswrapper[4793]: I0130 14:28:42.486482 4793 generic.go:334] "Generic (PLEG): container finished" podID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" exitCode=0 Jan 30 14:28:42 crc kubenswrapper[4793]: I0130 14:28:42.486839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb"} Jan 30 14:28:42 crc kubenswrapper[4793]: I0130 14:28:42.486872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerStarted","Data":"bd05cc44721911bb54d243d9cfe6e7c414c9830e172625313e31e6fa71a99d40"} Jan 30 14:28:43 crc kubenswrapper[4793]: I0130 14:28:43.503945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerStarted","Data":"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8"} Jan 30 14:28:44 crc kubenswrapper[4793]: I0130 14:28:44.514256 4793 generic.go:334] "Generic (PLEG): container finished" podID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" exitCode=0 Jan 30 14:28:44 crc kubenswrapper[4793]: I0130 14:28:44.514607 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8"} Jan 30 14:28:45 crc kubenswrapper[4793]: I0130 14:28:45.524241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerStarted","Data":"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905"} Jan 30 14:28:45 crc kubenswrapper[4793]: I0130 14:28:45.571976 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7wt9" podStartSLOduration=3.184672034 podStartE2EDuration="5.571953439s" podCreationTimestamp="2026-01-30 14:28:40 +0000 UTC" firstStartedPulling="2026-01-30 14:28:42.488687729 +0000 UTC m=+2733.190036220" lastFinishedPulling="2026-01-30 14:28:44.875969124 +0000 UTC m=+2735.577317625" observedRunningTime="2026-01-30 14:28:45.54800113 +0000 UTC m=+2736.249349631" watchObservedRunningTime="2026-01-30 14:28:45.571953439 +0000 UTC m=+2736.273301930" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.124490 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.125038 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.174245 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.626746 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.684439 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:53 crc kubenswrapper[4793]: I0130 14:28:53.597671 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7wt9" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" containerID="cri-o://e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" gracePeriod=2 Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.551570 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.583418 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.583491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.583538 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.584797 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities" (OuterVolumeSpecName: "utilities") pod "c78dc643-5d9a-4998-a1a2-2a1992eaad88" (UID: "c78dc643-5d9a-4998-a1a2-2a1992eaad88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.608291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c78dc643-5d9a-4998-a1a2-2a1992eaad88" (UID: "c78dc643-5d9a-4998-a1a2-2a1992eaad88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.616651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26" (OuterVolumeSpecName: "kube-api-access-8qm26") pod "c78dc643-5d9a-4998-a1a2-2a1992eaad88" (UID: "c78dc643-5d9a-4998-a1a2-2a1992eaad88"). InnerVolumeSpecName "kube-api-access-8qm26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663835 4793 generic.go:334] "Generic (PLEG): container finished" podID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" exitCode=0 Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663878 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905"} Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663905 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"bd05cc44721911bb54d243d9cfe6e7c414c9830e172625313e31e6fa71a99d40"} Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663933 4793 scope.go:117] "RemoveContainer" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.664157 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.686414 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.686664 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.686746 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.714016 4793 scope.go:117] "RemoveContainer" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.715664 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.727342 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.738140 4793 scope.go:117] "RemoveContainer" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.789031 4793 scope.go:117] "RemoveContainer" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" Jan 30 14:28:54 crc kubenswrapper[4793]: E0130 14:28:54.792449 4793 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905\": container with ID starting with e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905 not found: ID does not exist" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.792666 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905"} err="failed to get container status \"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905\": rpc error: code = NotFound desc = could not find container \"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905\": container with ID starting with e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905 not found: ID does not exist" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.792783 4793 scope.go:117] "RemoveContainer" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" Jan 30 14:28:54 crc kubenswrapper[4793]: E0130 14:28:54.793487 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8\": container with ID starting with d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8 not found: ID does not exist" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.793522 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8"} err="failed to get container status \"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8\": rpc error: code = NotFound desc = could not find container \"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8\": container with ID starting with d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8 not found: ID does not exist" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.793539 4793 scope.go:117] "RemoveContainer" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" Jan 30 14:28:54 crc kubenswrapper[4793]: E0130 14:28:54.795177 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb\": container with ID starting with d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb not found: ID does not exist" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.795277 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb"} err="failed to get container status \"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb\": rpc error: code = NotFound desc = could not find container \"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb\": container with ID starting with d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb not found: ID does not exist" Jan 30 14:28:56 crc kubenswrapper[4793]: I0130 14:28:56.407829 4793 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" path="/var/lib/kubelet/pods/c78dc643-5d9a-4998-a1a2-2a1992eaad88/volumes"
Jan 30 14:29:12 crc kubenswrapper[4793]: I0130 14:29:12.413721 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:29:12 crc kubenswrapper[4793]: I0130 14:29:12.414004 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:29:39 crc kubenswrapper[4793]: I0130 14:29:39.439854 4793 generic.go:334] "Generic (PLEG): container finished" podID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerID="61f0898c6128b3026d78cf3afa09780d7e497bed3bbd093ccb7f3ad49150e91f" exitCode=0
Jan 30 14:29:39 crc kubenswrapper[4793]: I0130 14:29:39.440494 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerDied","Data":"61f0898c6128b3026d78cf3afa09780d7e497bed3bbd093ccb7f3ad49150e91f"}
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.848903 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.979833 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") "
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980111 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") "
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980240 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") "
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980291 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") "
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") "
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.986568 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk" (OuterVolumeSpecName: "kube-api-access-k5pnk") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "kube-api-access-k5pnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
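The machine-config-daemon liveness failure above is a plain HTTP GET against http://127.0.0.1:8798/health; "connection refused" just means nothing was listening at that instant (for example, the daemon was restarting). A minimal handler that would satisfy such a probe, with the address and path taken from the log and everything else illustrative:

    package main

    import (
    	"log"
    	"net/http"
    )

    func main() {
    	mux := http.NewServeMux()
    	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
    		w.WriteHeader(http.StatusOK) // the kubelet accepts any 2xx/3xx as success
    	})
    	log.Fatal(http.ListenAndServe("127.0.0.1:8798", mux))
    }
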
Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.992463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.014704 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.016222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.026298 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory" (OuterVolumeSpecName: "inventory") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.082946 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.082995 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.083013 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.083026 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.083060 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.464683 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.464547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerDied","Data":"0bf138472118ab1f44e112f736372179f055ce03bbf973e33b87d18006a030f8"} Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.465544 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf138472118ab1f44e112f736372179f055ce03bbf973e33b87d18006a030f8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.558588 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8"] Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.558986 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559006 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.559040 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-content" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559064 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-content" Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.559078 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-utilities" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559086 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-utilities" Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.559110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559117 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559388 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559410 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.560754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.568566 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.568606 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.568566 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569080 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569198 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569241 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569495 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.575004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8"] Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.711874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.711947 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712346 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.814816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 
14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.814982 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.815025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.817015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 
14:29:41.818591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.821592 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.822487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.822759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.824192 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.824923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.825417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.834212 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.842246 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.890453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.414098 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.414439 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.460622 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8"] Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.475330 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerStarted","Data":"35c08494f8afe2508d0796d2d7916a60b01429d9956705b3e7cc36e86561fae0"} Jan 30 14:29:43 crc kubenswrapper[4793]: I0130 14:29:43.486646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerStarted","Data":"5e41fdf863829756b00ca7e86cc571728bb392f0583e10c4de618e692db88093"} Jan 30 14:29:43 crc kubenswrapper[4793]: I0130 14:29:43.520512 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" podStartSLOduration=2.082065362 podStartE2EDuration="2.520493733s" podCreationTimestamp="2026-01-30 14:29:41 +0000 UTC" firstStartedPulling="2026-01-30 14:29:42.469085759 +0000 UTC m=+2793.170434250" lastFinishedPulling="2026-01-30 14:29:42.90751413 +0000 UTC m=+2793.608862621" observedRunningTime="2026-01-30 14:29:43.510553969 +0000 UTC m=+2794.211902460" watchObservedRunningTime="2026-01-30 14:29:43.520493733 +0000 UTC m=+2794.221842224" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.151282 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.153825 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.156401 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.156613 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.170852 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.314010 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.314090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.314201 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.416100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.416148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.416214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.417561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod 
\"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.425080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.439085 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.474735 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.956314 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 14:30:01 crc kubenswrapper[4793]: I0130 14:30:01.701579 4793 generic.go:334] "Generic (PLEG): container finished" podID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerID="1def2597602a7873d34fb216db52e7e4d4963d5b5a3ca0e36a14a7576a9a797f" exitCode=0 Jan 30 14:30:01 crc kubenswrapper[4793]: I0130 14:30:01.701668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" event={"ID":"afd3a15c-5ed4-45be-8091-84573a97a63a","Type":"ContainerDied","Data":"1def2597602a7873d34fb216db52e7e4d4963d5b5a3ca0e36a14a7576a9a797f"} Jan 30 14:30:01 crc kubenswrapper[4793]: I0130 14:30:01.701900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" event={"ID":"afd3a15c-5ed4-45be-8091-84573a97a63a","Type":"ContainerStarted","Data":"d1bd11fd8a9e4e05f7c7410583f802caafc51abcd39d08a49ce8f8afd4d84643"} Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.036488 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.084165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"afd3a15c-5ed4-45be-8091-84573a97a63a\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.084245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"afd3a15c-5ed4-45be-8091-84573a97a63a\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.084551 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod \"afd3a15c-5ed4-45be-8091-84573a97a63a\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.085569 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume" (OuterVolumeSpecName: "config-volume") pod "afd3a15c-5ed4-45be-8091-84573a97a63a" (UID: "afd3a15c-5ed4-45be-8091-84573a97a63a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.091670 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afd3a15c-5ed4-45be-8091-84573a97a63a" (UID: "afd3a15c-5ed4-45be-8091-84573a97a63a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.091908 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm" (OuterVolumeSpecName: "kube-api-access-fqzmm") pod "afd3a15c-5ed4-45be-8091-84573a97a63a" (UID: "afd3a15c-5ed4-45be-8091-84573a97a63a"). InnerVolumeSpecName "kube-api-access-fqzmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.186979 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.187023 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.187033 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.720835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" event={"ID":"afd3a15c-5ed4-45be-8091-84573a97a63a","Type":"ContainerDied","Data":"d1bd11fd8a9e4e05f7c7410583f802caafc51abcd39d08a49ce8f8afd4d84643"} Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.721505 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1bd11fd8a9e4e05f7c7410583f802caafc51abcd39d08a49ce8f8afd4d84643" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.721585 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:04 crc kubenswrapper[4793]: I0130 14:30:04.123737 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 14:30:04 crc kubenswrapper[4793]: I0130 14:30:04.131458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 14:30:04 crc kubenswrapper[4793]: I0130 14:30:04.420831 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" path="/var/lib/kubelet/pods/6db0dcc6-874c-40f9-a0b7-309149c78f48/volumes" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.413438 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.414231 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.414300 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.415587 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57"} 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.415746 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57" gracePeriod=600 Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801364 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57" exitCode=0 Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801439 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57"} Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"} Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801745 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:30:51 crc kubenswrapper[4793]: I0130 14:30:51.294426 4793 scope.go:117] "RemoveContainer" containerID="0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6" Jan 30 14:30:57 crc kubenswrapper[4793]: I0130 14:30:57.831510 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.479582 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:31:45 crc kubenswrapper[4793]: E0130 14:31:45.480544 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerName="collect-profiles" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.480559 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerName="collect-profiles" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.480828 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerName="collect-profiles" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.482600 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.496781 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.558339 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.558596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.558642 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.660735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.660821 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.660882 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.661322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.661405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.690768 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.806079 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:46 crc kubenswrapper[4793]: I0130 14:31:46.449502 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:31:46 crc kubenswrapper[4793]: I0130 14:31:46.648529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c"} Jan 30 14:31:46 crc kubenswrapper[4793]: I0130 14:31:46.648845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"8c884434f855c40a03540f0ffa1d304bd12ee3704e243d46631a685f83a6e054"} Jan 30 14:31:47 crc kubenswrapper[4793]: I0130 14:31:47.659499 4793 generic.go:334] "Generic (PLEG): container finished" podID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" exitCode=0 Jan 30 14:31:47 crc kubenswrapper[4793]: I0130 14:31:47.659548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c"} Jan 30 14:31:48 crc kubenswrapper[4793]: I0130 14:31:48.671087 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0"} Jan 30 14:31:51 crc kubenswrapper[4793]: I0130 14:31:51.703577 4793 generic.go:334] "Generic (PLEG): container finished" podID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" exitCode=0 Jan 30 14:31:51 crc kubenswrapper[4793]: I0130 14:31:51.703626 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0"} Jan 30 14:31:52 crc kubenswrapper[4793]: I0130 14:31:52.713682 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d"} Jan 30 14:31:52 crc kubenswrapper[4793]: I0130 14:31:52.747964 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jtvvp" podStartSLOduration=3.283443355 podStartE2EDuration="7.747939521s" podCreationTimestamp="2026-01-30 14:31:45 +0000 UTC" firstStartedPulling="2026-01-30 14:31:47.661184803 +0000 UTC m=+2918.362533294" lastFinishedPulling="2026-01-30 
14:31:52.125680969 +0000 UTC m=+2922.827029460" observedRunningTime="2026-01-30 14:31:52.7406187 +0000 UTC m=+2923.441967211" watchObservedRunningTime="2026-01-30 14:31:52.747939521 +0000 UTC m=+2923.449288012" Jan 30 14:31:55 crc kubenswrapper[4793]: I0130 14:31:55.807241 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:55 crc kubenswrapper[4793]: I0130 14:31:55.807678 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:55 crc kubenswrapper[4793]: I0130 14:31:55.862218 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:05 crc kubenswrapper[4793]: I0130 14:32:05.854818 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:05 crc kubenswrapper[4793]: I0130 14:32:05.906894 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:32:06 crc kubenswrapper[4793]: I0130 14:32:06.861654 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jtvvp" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" containerID="cri-o://b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" gracePeriod=2 Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.339783 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.416154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"741a3bc2-86fb-4c08-9403-71f9900d2685\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.416281 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"741a3bc2-86fb-4c08-9403-71f9900d2685\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.416352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"741a3bc2-86fb-4c08-9403-71f9900d2685\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.417243 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities" (OuterVolumeSpecName: "utilities") pod "741a3bc2-86fb-4c08-9403-71f9900d2685" (UID: "741a3bc2-86fb-4c08-9403-71f9900d2685"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.434235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9" (OuterVolumeSpecName: "kube-api-access-h6lq9") pod "741a3bc2-86fb-4c08-9403-71f9900d2685" (UID: "741a3bc2-86fb-4c08-9403-71f9900d2685"). InnerVolumeSpecName "kube-api-access-h6lq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.476463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "741a3bc2-86fb-4c08-9403-71f9900d2685" (UID: "741a3bc2-86fb-4c08-9403-71f9900d2685"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.518799 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.519083 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.519155 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.880586 4793 generic.go:334] "Generic (PLEG): container finished" podID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" exitCode=0 Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.880643 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d"} Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.882403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"8c884434f855c40a03540f0ffa1d304bd12ee3704e243d46631a685f83a6e054"} Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.880667 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.882795 4793 scope.go:117] "RemoveContainer" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.931906 4793 scope.go:117] "RemoveContainer" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.939805 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.951433 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.971439 4793 scope.go:117] "RemoveContainer" containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.018910 4793 scope.go:117] "RemoveContainer" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" Jan 30 14:32:08 crc kubenswrapper[4793]: E0130 14:32:08.019459 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d\": container with ID starting with b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d not found: ID does not exist" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.019752 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d"} err="failed to get container status \"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d\": rpc error: code = NotFound desc = could not find container \"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d\": container with ID starting with b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d not found: ID does not exist" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.019779 4793 scope.go:117] "RemoveContainer" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" Jan 30 14:32:08 crc kubenswrapper[4793]: E0130 14:32:08.020138 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0\": container with ID starting with 32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0 not found: ID does not exist" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.020174 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0"} err="failed to get container status \"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0\": rpc error: code = NotFound desc = could not find container \"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0\": container with ID starting with 32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0 not found: ID does not exist" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.020196 4793 scope.go:117] "RemoveContainer" 
containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" Jan 30 14:32:08 crc kubenswrapper[4793]: E0130 14:32:08.020529 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c\": container with ID starting with 6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c not found: ID does not exist" containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.020552 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c"} err="failed to get container status \"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c\": rpc error: code = NotFound desc = could not find container \"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c\": container with ID starting with 6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c not found: ID does not exist" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.411616 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" path="/var/lib/kubelet/pods/741a3bc2-86fb-4c08-9403-71f9900d2685/volumes" Jan 30 14:32:12 crc kubenswrapper[4793]: I0130 14:32:12.414149 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:32:12 crc kubenswrapper[4793]: I0130 14:32:12.414656 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:32:19 crc kubenswrapper[4793]: I0130 14:32:19.999228 4793 generic.go:334] "Generic (PLEG): container finished" podID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerID="5e41fdf863829756b00ca7e86cc571728bb392f0583e10c4de618e692db88093" exitCode=0 Jan 30 14:32:19 crc kubenswrapper[4793]: I0130 14:32:19.999347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerDied","Data":"5e41fdf863829756b00ca7e86cc571728bb392f0583e10c4de618e692db88093"} Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.459460 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629353 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629414 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629453 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629577 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629620 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.638468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc" (OuterVolumeSpecName: "kube-api-access-c4bsc") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "kube-api-access-c4bsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.638666 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.655578 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.660428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.662925 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.668365 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.676424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.690188 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.705737 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory" (OuterVolumeSpecName: "inventory") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731123 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731159 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731174 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731187 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731198 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731207 4793 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731216 4793 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731224 4793 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731232 4793 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.016401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerDied","Data":"35c08494f8afe2508d0796d2d7916a60b01429d9956705b3e7cc36e86561fae0"} Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.016452 4793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="35c08494f8afe2508d0796d2d7916a60b01429d9956705b3e7cc36e86561fae0" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.016502 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.293985 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"] Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294618 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.294687 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294760 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-content" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.294811 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-content" Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294879 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-utilities" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.294937 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-utilities" Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294995 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.295079 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.295595 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.295696 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.296483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300257 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300623 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300783 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300952 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.301208 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.316818 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"] Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342405 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342717 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 
crc kubenswrapper[4793]: I0130 14:32:22.342941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.343020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.444796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.445738 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.445793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.445898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.446009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.446037 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" 
(UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.446140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.448934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.449403 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.451893 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.452772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.453081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.453733 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.470909 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.626068 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:23 crc kubenswrapper[4793]: I0130 14:32:23.141881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"] Jan 30 14:32:24 crc kubenswrapper[4793]: I0130 14:32:24.040783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerStarted","Data":"a64d90e6e708916bddb2fb85fc43ea11a1f35f9eae3151af244a63d85665315a"} Jan 30 14:32:24 crc kubenswrapper[4793]: I0130 14:32:24.041130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerStarted","Data":"34ff75da3ef3d1a97297c8bba1b71ad20c81e8b1c9fef9fb1b215b54b7a4a0d3"} Jan 30 14:32:24 crc kubenswrapper[4793]: I0130 14:32:24.064490 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" podStartSLOduration=1.629312266 podStartE2EDuration="2.064472558s" podCreationTimestamp="2026-01-30 14:32:22 +0000 UTC" firstStartedPulling="2026-01-30 14:32:23.153714062 +0000 UTC m=+2953.855062553" lastFinishedPulling="2026-01-30 14:32:23.588874354 +0000 UTC m=+2954.290222845" observedRunningTime="2026-01-30 14:32:24.061356952 +0000 UTC m=+2954.762705473" watchObservedRunningTime="2026-01-30 14:32:24.064472558 +0000 UTC m=+2954.765821049" Jan 30 14:32:42 crc kubenswrapper[4793]: I0130 14:32:42.413296 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:32:42 crc kubenswrapper[4793]: I0130 14:32:42.413919 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.413240 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.413819 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.413864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.414572 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.414629 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" gracePeriod=600 Jan 30 14:33:13 crc kubenswrapper[4793]: E0130 14:33:13.152953 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.486281 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" exitCode=0 Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.486340 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"} Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.486386 4793 scope.go:117] "RemoveContainer" containerID="70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57" Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.487158 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:33:13 crc kubenswrapper[4793]: E0130 14:33:13.487429 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:28 crc kubenswrapper[4793]: I0130 14:33:28.399386 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:33:28 crc kubenswrapper[4793]: E0130 14:33:28.400145 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:40 crc kubenswrapper[4793]: I0130 
Jan 30 14:33:40 crc kubenswrapper[4793]: I0130 14:33:40.405404 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:33:40 crc kubenswrapper[4793]: E0130 14:33:40.406138 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:33:54 crc kubenswrapper[4793]: I0130 14:33:54.398080 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:33:54 crc kubenswrapper[4793]: E0130 14:33:54.398792 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:34:09 crc kubenswrapper[4793]: I0130 14:34:09.398383 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:34:09 crc kubenswrapper[4793]: E0130 14:34:09.399190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:34:23 crc kubenswrapper[4793]: I0130 14:34:23.398396 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:34:23 crc kubenswrapper[4793]: E0130 14:34:23.399114 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:34:38 crc kubenswrapper[4793]: I0130 14:34:38.398860 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:34:38 crc kubenswrapper[4793]: E0130 14:34:38.399649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:34:49 crc kubenswrapper[4793]: I0130 14:34:49.398554 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:34:49 crc kubenswrapper[4793]: E0130 14:34:49.400693 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:35:02 crc kubenswrapper[4793]: I0130 14:35:02.399238 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:35:02 crc kubenswrapper[4793]: E0130 14:35:02.400319 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:35:13 crc kubenswrapper[4793]: I0130 14:35:13.399308 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:35:13 crc kubenswrapper[4793]: E0130 14:35:13.400503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:35:28 crc kubenswrapper[4793]: I0130 14:35:28.398810 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:35:28 crc kubenswrapper[4793]: E0130 14:35:28.399608 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:35:40 crc kubenswrapper[4793]: I0130 14:35:40.774515 4793 generic.go:334] "Generic (PLEG): container finished" podID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerID="a64d90e6e708916bddb2fb85fc43ea11a1f35f9eae3151af244a63d85665315a" exitCode=0
Jan 30 14:35:40 crc kubenswrapper[4793]: I0130 14:35:40.774641 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerDied","Data":"a64d90e6e708916bddb2fb85fc43ea11a1f35f9eae3151af244a63d85665315a"}
Jan 30 14:35:41 crc kubenswrapper[4793]: I0130 14:35:41.399005 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:35:41 crc kubenswrapper[4793]: E0130 14:35:41.399246 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.205658 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.321849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.321933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.321989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322064 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322166 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322193 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322265 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") "
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.327891 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.339642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc" (OuterVolumeSpecName: "kube-api-access-hw7hc") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "kube-api-access-hw7hc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.356133 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.356366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.356665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.359965 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.375814 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory" (OuterVolumeSpecName: "inventory") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429340 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429378 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429421 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429437 4793 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429449 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429464 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429612 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.798815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerDied","Data":"34ff75da3ef3d1a97297c8bba1b71ad20c81e8b1c9fef9fb1b215b54b7a4a0d3"}
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.798866 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ff75da3ef3d1a97297c8bba1b71ad20c81e8b1c9fef9fb1b215b54b7a4a0d3"
Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.798874 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"
Jan 30 14:35:52 crc kubenswrapper[4793]: I0130 14:35:52.398245 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:35:52 crc kubenswrapper[4793]: E0130 14:35:52.399141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:36:04 crc kubenswrapper[4793]: I0130 14:36:04.398680 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:36:04 crc kubenswrapper[4793]: E0130 14:36:04.400332 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:36:19 crc kubenswrapper[4793]: I0130 14:36:19.398214 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:36:19 crc kubenswrapper[4793]: E0130 14:36:19.398919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:36:34 crc kubenswrapper[4793]: I0130 14:36:34.398266 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:36:34 crc kubenswrapper[4793]: E0130 14:36:34.399211 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.179080 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 14:36:49 crc kubenswrapper[4793]: E0130 14:36:49.180172 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.180194 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.180405 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.181185 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.183937 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.183994 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.184188 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.184627 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9sb9w"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.208548 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323791 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323819 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323864 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323990 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.324158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.398907 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:36:49 crc kubenswrapper[4793]: E0130 14:36:49.399236 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426350 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426372 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426411 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426433 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426505 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426819 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.427673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.427744 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.427806 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.432545 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.433392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.442283 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.447797 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.455128 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest"
Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.501199 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 14:36:50 crc kubenswrapper[4793]: I0130 14:36:50.003004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 14:36:50 crc kubenswrapper[4793]: I0130 14:36:50.003857 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 14:36:50 crc kubenswrapper[4793]: I0130 14:36:50.396366 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerStarted","Data":"55c6a2b8062403d0e3d82dc5615fa6326ff29a1fce4fe5257e5d197c6f2071cb"}
Jan 30 14:37:04 crc kubenswrapper[4793]: I0130 14:37:04.402786 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:37:04 crc kubenswrapper[4793]: E0130 14:37:04.477146 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:37:19 crc kubenswrapper[4793]: I0130 14:37:19.076103 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" podUID="e88efb4a-1489-4847-adb4-230a8b5db6ef" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.78:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:37:19 crc kubenswrapper[4793]: I0130 14:37:19.973339 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:37:19 crc kubenswrapper[4793]: E0130 14:37:19.997765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:37:32 crc kubenswrapper[4793]: I0130 14:37:32.398162 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:37:32 crc kubenswrapper[4793]: E0130 14:37:32.398920 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.112676 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.113390 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-579bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(4bf53e2d-d024-4526-ada2-0ee6b461babb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.115415 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb"
podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.198381 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" Jan 30 14:37:43 crc kubenswrapper[4793]: I0130 14:37:43.399171 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:43 crc kubenswrapper[4793]: E0130 14:37:43.399834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:55 crc kubenswrapper[4793]: I0130 14:37:55.251541 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 14:37:58 crc kubenswrapper[4793]: I0130 14:37:58.414076 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:58 crc kubenswrapper[4793]: E0130 14:37:58.415104 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:58 crc kubenswrapper[4793]: I0130 14:37:58.491971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerStarted","Data":"d89fe0491771c7c6f955e91e1925c9e0d02dd442783163c9438dbd9b02ce47d9"} Jan 30 14:37:58 crc kubenswrapper[4793]: I0130 14:37:58.533902 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.289722724 podStartE2EDuration="1m10.533889233s" podCreationTimestamp="2026-01-30 14:36:48 +0000 UTC" firstStartedPulling="2026-01-30 14:36:50.003607353 +0000 UTC m=+3220.704955854" lastFinishedPulling="2026-01-30 14:37:55.247773872 +0000 UTC m=+3285.949122363" observedRunningTime="2026-01-30 14:37:58.533027362 +0000 UTC m=+3289.234375873" watchObservedRunningTime="2026-01-30 14:37:58.533889233 +0000 UTC m=+3289.235237724" Jan 30 14:38:13 crc kubenswrapper[4793]: I0130 14:38:13.399263 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:38:14 crc kubenswrapper[4793]: I0130 14:38:14.644465 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc"} Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.700260 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.703488 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.778950 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.783967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.784112 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.784236 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.885677 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.885834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.885974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.906903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.906952 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"community-operators-8zg8s\" (UID: 
\"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.923884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:32 crc kubenswrapper[4793]: I0130 14:38:32.022586 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.253291 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.971984 4793 generic.go:334] "Generic (PLEG): container finished" podID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerID="b03772cc1fe623304aa850d2ae3e7a880985ec5280b330df6c3f217d693baf92" exitCode=0 Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.972180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"b03772cc1fe623304aa850d2ae3e7a880985ec5280b330df6c3f217d693baf92"} Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.972339 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerStarted","Data":"e316ead69d15b12075fc9f1b6e2697a44e33133531f74ce11960699c1bb8a38d"} Jan 30 14:38:35 crc kubenswrapper[4793]: I0130 14:38:35.988649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerStarted","Data":"2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617"} Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.445364 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.447673 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.466915 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.571783 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.572104 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.572250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674614 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674642 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674878 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.675116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.717309 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.809029 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:41 crc kubenswrapper[4793]: I0130 14:38:41.148374 4793 generic.go:334] "Generic (PLEG): container finished" podID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerID="2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617" exitCode=0 Jan 30 14:38:41 crc kubenswrapper[4793]: I0130 14:38:41.148414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617"} Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.046837 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:38:43 crc kubenswrapper[4793]: W0130 14:38:43.077269 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb9b1ca_f2f2_4d59_91d8_f6c5b0ce4615.slice/crio-41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45 WatchSource:0}: Error finding container 41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45: Status 404 returned error can't find the container with id 41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45 Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.164183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerStarted","Data":"41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45"} Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.168833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerStarted","Data":"73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea"} Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.188090 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8zg8s" podStartSLOduration=3.475555224 podStartE2EDuration="12.188068097s" podCreationTimestamp="2026-01-30 14:38:31 +0000 UTC" firstStartedPulling="2026-01-30 14:38:33.97371159 +0000 UTC m=+3324.675060081" lastFinishedPulling="2026-01-30 14:38:42.686224463 +0000 UTC m=+3333.387572954" observedRunningTime="2026-01-30 14:38:43.18285622 +0000 UTC m=+3333.884204721" watchObservedRunningTime="2026-01-30 14:38:43.188068097 +0000 UTC m=+3333.889416598" Jan 30 14:38:44 crc kubenswrapper[4793]: I0130 14:38:44.181072 4793 generic.go:334] "Generic (PLEG): container finished" podID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" exitCode=0 Jan 30 14:38:44 crc kubenswrapper[4793]: I0130 14:38:44.181160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" 
event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4"} Jan 30 14:38:46 crc kubenswrapper[4793]: I0130 14:38:46.198383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerStarted","Data":"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d"} Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.024149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.024905 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.116513 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.278649 4793 generic.go:334] "Generic (PLEG): container finished" podID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" exitCode=0 Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.279681 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d"} Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.425586 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:53 crc kubenswrapper[4793]: I0130 14:38:53.371140 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:54 crc kubenswrapper[4793]: I0130 14:38:54.297270 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8zg8s" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" containerID="cri-o://73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea" gracePeriod=2 Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.306930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerStarted","Data":"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d"} Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.309364 4793 generic.go:334] "Generic (PLEG): container finished" podID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerID="73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea" exitCode=0 Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.309405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea"} Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.340862 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwwtp" podStartSLOduration=5.299429996 podStartE2EDuration="15.340843109s" podCreationTimestamp="2026-01-30 14:38:40 
+0000 UTC" firstStartedPulling="2026-01-30 14:38:44.183993528 +0000 UTC m=+3334.885342019" lastFinishedPulling="2026-01-30 14:38:54.225406641 +0000 UTC m=+3344.926755132" observedRunningTime="2026-01-30 14:38:55.329336306 +0000 UTC m=+3346.030684817" watchObservedRunningTime="2026-01-30 14:38:55.340843109 +0000 UTC m=+3346.042191600" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.704350 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.838766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.839144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.839266 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.839957 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities" (OuterVolumeSpecName: "utilities") pod "262ecbe3-59ce-4b01-988f-fdffe2abbeaf" (UID: "262ecbe3-59ce-4b01-988f-fdffe2abbeaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.853290 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk" (OuterVolumeSpecName: "kube-api-access-7lhrk") pod "262ecbe3-59ce-4b01-988f-fdffe2abbeaf" (UID: "262ecbe3-59ce-4b01-988f-fdffe2abbeaf"). InnerVolumeSpecName "kube-api-access-7lhrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.889789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "262ecbe3-59ce-4b01-988f-fdffe2abbeaf" (UID: "262ecbe3-59ce-4b01-988f-fdffe2abbeaf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.941507 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.941544 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.941554 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.341359 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"e316ead69d15b12075fc9f1b6e2697a44e33133531f74ce11960699c1bb8a38d"} Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.341438 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.341465 4793 scope.go:117] "RemoveContainer" containerID="73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.375018 4793 scope.go:117] "RemoveContainer" containerID="2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.403291 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.413022 4793 scope.go:117] "RemoveContainer" containerID="b03772cc1fe623304aa850d2ae3e7a880985ec5280b330df6c3f217d693baf92" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.431244 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:58 crc kubenswrapper[4793]: I0130 14:38:58.409169 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" path="/var/lib/kubelet/pods/262ecbe3-59ce-4b01-988f-fdffe2abbeaf/volumes" Jan 30 14:39:00 crc kubenswrapper[4793]: I0130 14:39:00.809920 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:00 crc kubenswrapper[4793]: I0130 14:39:00.810205 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:01 crc kubenswrapper[4793]: I0130 14:39:01.859979 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cwwtp" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" probeResult="failure" output=< Jan 30 14:39:01 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:39:01 crc kubenswrapper[4793]: > Jan 30 14:39:10 crc kubenswrapper[4793]: I0130 14:39:10.868027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:10 crc kubenswrapper[4793]: I0130 
14:39:10.919623 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:11 crc kubenswrapper[4793]: I0130 14:39:11.654357 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:39:12 crc kubenswrapper[4793]: I0130 14:39:12.531663 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwwtp" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" containerID="cri-o://15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" gracePeriod=2 Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.188254 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.317785 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.317962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.318064 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.319427 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities" (OuterVolumeSpecName: "utilities") pod "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" (UID: "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.329974 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4" (OuterVolumeSpecName: "kube-api-access-ghnn4") pod "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" (UID: "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615"). InnerVolumeSpecName "kube-api-access-ghnn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.349170 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" (UID: "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.420082 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.420118 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.420129 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") on node \"crc\" DevicePath \"\"" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542840 4793 generic.go:334] "Generic (PLEG): container finished" podID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" exitCode=0 Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d"} Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542929 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542986 4793 scope.go:117] "RemoveContainer" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542970 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45"} Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.569630 4793 scope.go:117] "RemoveContainer" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.614404 4793 scope.go:117] "RemoveContainer" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.637243 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.649919 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.654559 4793 scope.go:117] "RemoveContainer" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" Jan 30 14:39:13 crc kubenswrapper[4793]: E0130 14:39:13.655007 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d\": container with ID starting with 15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d not found: ID does not exist" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655070 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d"} err="failed to get container status \"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d\": rpc error: code = NotFound desc = could not find container \"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d\": container with ID starting with 15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d not found: ID does not exist" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655091 4793 scope.go:117] "RemoveContainer" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" Jan 30 14:39:13 crc kubenswrapper[4793]: E0130 14:39:13.655379 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d\": container with ID starting with 8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d not found: ID does not exist" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655416 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d"} err="failed to get container status \"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d\": rpc error: code = NotFound desc = could not find container \"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d\": container with ID starting with 8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d not found: ID does not exist" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655436 4793 scope.go:117] "RemoveContainer" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" Jan 30 14:39:13 crc kubenswrapper[4793]: E0130 14:39:13.655697 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4\": container with ID starting with 358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4 not found: ID does not exist" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655725 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4"} err="failed to get container status \"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4\": rpc error: code = NotFound desc = could not find container \"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4\": container with ID starting with 358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4 not found: ID does not exist" Jan 30 14:39:14 crc kubenswrapper[4793]: I0130 14:39:14.411824 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" path="/var/lib/kubelet/pods/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615/volumes" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435119 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"] Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435909 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435920 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435937 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435944 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435956 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435962 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435980 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435986 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.436008 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436013 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.436024 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436029 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436197 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436217 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.437479 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.459108 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"] Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.520090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.520167 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.520244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.622359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.622437 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.622516 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.623108 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.623110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.643482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.759171 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.250658 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"] Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.778135 4793 generic.go:334] "Generic (PLEG): container finished" podID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20" exitCode=0 Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.778270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"} Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.778441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerStarted","Data":"37e9623922456531cfc7cc936a8aa3fa6f702e72bc6a0a5f3f985a532c534c40"} Jan 30 14:39:40 crc kubenswrapper[4793]: I0130 14:39:40.786610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerStarted","Data":"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"} Jan 30 14:39:49 crc kubenswrapper[4793]: I0130 14:39:49.871536 4793 generic.go:334] "Generic (PLEG): container finished" podID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29" exitCode=0 Jan 30 14:39:49 crc kubenswrapper[4793]: I0130 14:39:49.871602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"} Jan 30 14:39:50 crc kubenswrapper[4793]: I0130 14:39:50.914277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerStarted","Data":"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"} Jan 30 14:39:50 crc kubenswrapper[4793]: I0130 14:39:50.939717 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d22cv" podStartSLOduration=2.399260297 podStartE2EDuration="12.939666178s" podCreationTimestamp="2026-01-30 14:39:38 +0000 UTC" firstStartedPulling="2026-01-30 14:39:39.780267273 +0000 UTC m=+3390.481615764" lastFinishedPulling="2026-01-30 14:39:50.320673154 +0000 UTC m=+3401.022021645" observedRunningTime="2026-01-30 14:39:50.934805448 +0000 UTC m=+3401.636153949" watchObservedRunningTime="2026-01-30 14:39:50.939666178 +0000 UTC m=+3401.641014669" Jan 30 14:39:58 crc kubenswrapper[4793]: I0130 14:39:58.759502 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d22cv" 
Jan 30 14:39:58 crc kubenswrapper[4793]: I0130 14:39:58.760087 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:39:59 crc kubenswrapper[4793]: I0130 14:39:59.807776 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d22cv" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server" probeResult="failure" output=<
Jan 30 14:39:59 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 14:39:59 crc kubenswrapper[4793]: >
Jan 30 14:40:08 crc kubenswrapper[4793]: I0130 14:40:08.808390 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:08 crc kubenswrapper[4793]: I0130 14:40:08.861071 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:09 crc kubenswrapper[4793]: I0130 14:40:09.637739 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"]
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.076800 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d22cv" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server" containerID="cri-o://b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f" gracePeriod=2
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.828037 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.866983 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") "
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.867251 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") "
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.867307 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") "
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.875869 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2" (OuterVolumeSpecName: "kube-api-access-72cx2") pod "c91d9b4c-8c51-4d39-883a-e0911bde0ad9" (UID: "c91d9b4c-8c51-4d39-883a-e0911bde0ad9"). InnerVolumeSpecName "kube-api-access-72cx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.876466 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities" (OuterVolumeSpecName: "utilities") pod "c91d9b4c-8c51-4d39-883a-e0911bde0ad9" (UID: "c91d9b4c-8c51-4d39-883a-e0911bde0ad9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.969684 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.969733 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") on node \"crc\" DevicePath \"\""
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.040456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c91d9b4c-8c51-4d39-883a-e0911bde0ad9" (UID: "c91d9b4c-8c51-4d39-883a-e0911bde0ad9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.071434 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.087992 4793 generic.go:334] "Generic (PLEG): container finished" podID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f" exitCode=0
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.088035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"}
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.089743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"37e9623922456531cfc7cc936a8aa3fa6f702e72bc6a0a5f3f985a532c534c40"}
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.088117 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.089788 4793 scope.go:117] "RemoveContainer" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.114915 4793 scope.go:117] "RemoveContainer" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.138881 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"]
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.147193 4793 scope.go:117] "RemoveContainer" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.147969 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"]
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.183512 4793 scope.go:117] "RemoveContainer" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"
Jan 30 14:40:11 crc kubenswrapper[4793]: E0130 14:40:11.184032 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f\": container with ID starting with b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f not found: ID does not exist" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184118 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"} err="failed to get container status \"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f\": rpc error: code = NotFound desc = could not find container \"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f\": container with ID starting with b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f not found: ID does not exist"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184154 4793 scope.go:117] "RemoveContainer" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"
Jan 30 14:40:11 crc kubenswrapper[4793]: E0130 14:40:11.184576 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29\": container with ID starting with 30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29 not found: ID does not exist" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184602 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"} err="failed to get container status \"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29\": rpc error: code = NotFound desc = could not find container \"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29\": container with ID starting with 30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29 not found: ID does not exist"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184616 4793 scope.go:117] "RemoveContainer" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"
Jan 30 14:40:11 crc kubenswrapper[4793]: E0130 14:40:11.184858 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20\": container with ID starting with 45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20 not found: ID does not exist" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184878 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"} err="failed to get container status \"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20\": rpc error: code = NotFound desc = could not find container \"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20\": container with ID starting with 45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20 not found: ID does not exist"
Jan 30 14:40:12 crc kubenswrapper[4793]: I0130 14:40:12.411425 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" path="/var/lib/kubelet/pods/c91d9b4c-8c51-4d39-883a-e0911bde0ad9/volumes"
Jan 30 14:40:42 crc kubenswrapper[4793]: I0130 14:40:42.413406 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:40:42 crc kubenswrapper[4793]: I0130 14:40:42.414007 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:41:12 crc kubenswrapper[4793]: I0130 14:41:12.413281 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:41:12 crc kubenswrapper[4793]: I0130 14:41:12.413758 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.421814 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.422469 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.422753 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.423504 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.423561 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc" gracePeriod=600
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.921490 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc" exitCode=0
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.921964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc"}
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.922077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716"}
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.922158 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.251535 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"]
Jan 30 14:41:50 crc kubenswrapper[4793]: E0130 14:41:50.252463 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-utilities"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252479 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-utilities"
Jan 30 14:41:50 crc kubenswrapper[4793]: E0130 14:41:50.252521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-content"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252530 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-content"
Jan 30 14:41:50 crc kubenswrapper[4793]: E0130 14:41:50.252561 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252569 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252805 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.254308 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.283100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"]
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.331547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.331633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.331728 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.433461 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.433798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.434241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.434613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.434648 4793
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.473785 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.587715 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:41:51 crc kubenswrapper[4793]: I0130 14:41:51.194447 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.007718 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" exitCode=0 Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.008023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac"} Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.008058 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerStarted","Data":"c153764a949f50d21d71def364eb8bcb1b9bbda31f3f770f7a6cbb2167fdd2b3"} Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.014749 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:41:54 crc kubenswrapper[4793]: I0130 14:41:54.026158 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerStarted","Data":"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251"} Jan 30 14:42:01 crc kubenswrapper[4793]: I0130 14:42:01.092950 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" exitCode=0 Jan 30 14:42:01 crc kubenswrapper[4793]: I0130 14:42:01.093035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251"} Jan 30 14:42:07 crc kubenswrapper[4793]: I0130 14:42:07.145494 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerStarted","Data":"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3"} Jan 30 14:42:07 crc kubenswrapper[4793]: I0130 14:42:07.168729 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kwkmg" podStartSLOduration=2.390315969 podStartE2EDuration="17.1687071s" podCreationTimestamp="2026-01-30 14:41:50 +0000 UTC" firstStartedPulling="2026-01-30 14:41:52.014267612 +0000 UTC m=+3522.715616103" lastFinishedPulling="2026-01-30 14:42:06.792658743 +0000 UTC m=+3537.494007234" observedRunningTime="2026-01-30 14:42:07.163164804 +0000 UTC m=+3537.864513305" watchObservedRunningTime="2026-01-30 14:42:07.1687071 +0000 UTC m=+3537.870055591" Jan 30 14:42:10 crc kubenswrapper[4793]: I0130 14:42:10.587903 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:10 crc kubenswrapper[4793]: I0130 14:42:10.589115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:11 crc kubenswrapper[4793]: I0130 14:42:11.642473 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" probeResult="failure" output=< Jan 30 14:42:11 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:42:11 crc kubenswrapper[4793]: > Jan 30 14:42:21 crc kubenswrapper[4793]: I0130 14:42:21.636370 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" probeResult="failure" output=< Jan 30 14:42:21 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:42:21 crc kubenswrapper[4793]: > Jan 30 14:42:31 crc kubenswrapper[4793]: I0130 14:42:31.634947 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" probeResult="failure" output=< Jan 30 14:42:31 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:42:31 crc kubenswrapper[4793]: > Jan 30 14:42:40 crc kubenswrapper[4793]: I0130 14:42:40.633646 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:40 crc kubenswrapper[4793]: I0130 14:42:40.686470 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:40 crc kubenswrapper[4793]: I0130 14:42:40.873396 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:42:42 crc kubenswrapper[4793]: I0130 14:42:42.457422 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" containerID="cri-o://cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" gracePeriod=2 Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.133509 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.291645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.291885 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.291975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.292598 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities" (OuterVolumeSpecName: "utilities") pod "eaf6755c-f96b-44cd-a05b-10f4420c18b8" (UID: "eaf6755c-f96b-44cd-a05b-10f4420c18b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.298126 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.298300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx" (OuterVolumeSpecName: "kube-api-access-pltqx") pod "eaf6755c-f96b-44cd-a05b-10f4420c18b8" (UID: "eaf6755c-f96b-44cd-a05b-10f4420c18b8"). InnerVolumeSpecName "kube-api-access-pltqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.346715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaf6755c-f96b-44cd-a05b-10f4420c18b8" (UID: "eaf6755c-f96b-44cd-a05b-10f4420c18b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.399676 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.399890 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.468933 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" exitCode=0 Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.468973 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3"} Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.469005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"c153764a949f50d21d71def364eb8bcb1b9bbda31f3f770f7a6cbb2167fdd2b3"} Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.469027 4793 scope.go:117] "RemoveContainer" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.469211 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.507379 4793 scope.go:117] "RemoveContainer" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.536624 4793 scope.go:117] "RemoveContainer" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.537770 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.551669 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.581715 4793 scope.go:117] "RemoveContainer" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" Jan 30 14:42:43 crc kubenswrapper[4793]: E0130 14:42:43.582408 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3\": container with ID starting with cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3 not found: ID does not exist" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582465 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3"} err="failed to get container status \"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3\": rpc error: code = NotFound desc = could not find container \"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3\": container with ID starting with cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3 not found: ID does not exist" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582493 4793 scope.go:117] "RemoveContainer" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" Jan 30 14:42:43 crc kubenswrapper[4793]: E0130 14:42:43.582777 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251\": container with ID starting with aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251 not found: ID does not exist" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582819 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251"} err="failed to get container status \"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251\": rpc error: code = NotFound desc = could not find container \"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251\": container with ID starting with aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251 not found: ID does not exist" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582846 4793 scope.go:117] "RemoveContainer" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" Jan 30 14:42:43 crc kubenswrapper[4793]: E0130 14:42:43.583230 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac\": container with ID starting with 02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac not found: ID does not exist" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.583254 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac"} err="failed to get container status \"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac\": rpc error: code = NotFound desc = could not find container \"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac\": container with ID starting with 02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac not found: ID does not exist" Jan 30 14:42:44 crc kubenswrapper[4793]: I0130 14:42:44.408841 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" path="/var/lib/kubelet/pods/eaf6755c-f96b-44cd-a05b-10f4420c18b8/volumes" Jan 30 14:43:42 crc kubenswrapper[4793]: I0130 14:43:42.414358 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:43:42 crc kubenswrapper[4793]: I0130 14:43:42.414956 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:44:12 crc kubenswrapper[4793]: I0130 14:44:12.413948 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:44:12 crc kubenswrapper[4793]: I0130 14:44:12.414589 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.413215 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.413747 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.418703 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.419523 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.419599 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" gracePeriod=600 Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.628738 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" exitCode=0 Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.628786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716"} Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.628823 4793 scope.go:117] "RemoveContainer" containerID="3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc" Jan 30 14:44:42 crc kubenswrapper[4793]: E0130 14:44:42.898614 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:44:43 crc kubenswrapper[4793]: I0130 14:44:43.640019 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:44:43 crc kubenswrapper[4793]: E0130 14:44:43.640347 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:44:54 crc kubenswrapper[4793]: I0130 14:44:54.398822 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:44:54 crc kubenswrapper[4793]: E0130 14:44:54.399599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:00 crc 
kubenswrapper[4793]: I0130 14:45:00.318110 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 14:45:00 crc kubenswrapper[4793]: E0130 14:45:00.319110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319126 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[4793]: E0130 14:45:00.319136 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[4793]: E0130 14:45:00.319152 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319160 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319328 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.321685 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.323111 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.328843 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.446751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.446854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.447171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod 
\"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.548822 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.548902 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.549003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.550974 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.555759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.567267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.644397 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.139133 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.796035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerStarted","Data":"2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45"} Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.796456 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerStarted","Data":"8399c4ec038355d07dc866d370901380876d74943e2335ba1ab215513cac63aa"} Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.816646 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" podStartSLOduration=1.816626997 podStartE2EDuration="1.816626997s" podCreationTimestamp="2026-01-30 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:45:01.816540035 +0000 UTC m=+3712.517888526" watchObservedRunningTime="2026-01-30 14:45:01.816626997 +0000 UTC m=+3712.517975488" Jan 30 14:45:02 crc kubenswrapper[4793]: I0130 14:45:02.807414 4793 generic.go:334] "Generic (PLEG): container finished" podID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerID="2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45" exitCode=0 Jan 30 14:45:02 crc kubenswrapper[4793]: I0130 14:45:02.807469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerDied","Data":"2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45"} Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.289902 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.424640 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.424801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.424856 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.425534 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c63ff2c-cb24-48c2-9af7-05d299d8b36a" (UID: "1c63ff2c-cb24-48c2-9af7-05d299d8b36a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.432181 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7" (OuterVolumeSpecName: "kube-api-access-wkzl7") pod "1c63ff2c-cb24-48c2-9af7-05d299d8b36a" (UID: "1c63ff2c-cb24-48c2-9af7-05d299d8b36a"). InnerVolumeSpecName "kube-api-access-wkzl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.434376 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c63ff2c-cb24-48c2-9af7-05d299d8b36a" (UID: "1c63ff2c-cb24-48c2-9af7-05d299d8b36a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.526908 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.526950 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.526966 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.825663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerDied","Data":"8399c4ec038355d07dc866d370901380876d74943e2335ba1ab215513cac63aa"} Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.825880 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8399c4ec038355d07dc866d370901380876d74943e2335ba1ab215513cac63aa" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.825771 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.925382 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.940429 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:45:06 crc kubenswrapper[4793]: I0130 14:45:06.408334 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" path="/var/lib/kubelet/pods/0262a970-62b2-47c1-93bf-1e4455a999bf/volumes" Jan 30 14:45:07 crc kubenswrapper[4793]: I0130 14:45:07.398863 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:07 crc kubenswrapper[4793]: E0130 14:45:07.399447 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:20 crc kubenswrapper[4793]: I0130 14:45:20.404269 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:20 crc kubenswrapper[4793]: E0130 14:45:20.404962 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:34 crc kubenswrapper[4793]: I0130 14:45:34.398815 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:34 crc kubenswrapper[4793]: E0130 14:45:34.399584 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:45 crc kubenswrapper[4793]: I0130 14:45:45.398947 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:45 crc kubenswrapper[4793]: E0130 14:45:45.399824 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:57 crc kubenswrapper[4793]: I0130 14:45:57.398719 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:57 crc kubenswrapper[4793]: E0130 14:45:57.399499 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:59 crc kubenswrapper[4793]: I0130 14:45:59.362111 4793 scope.go:117] "RemoveContainer" containerID="21efee8d4521693281692f27a68228834ba45b6ab82173ff835a52b2e30855b1" Jan 30 14:46:10 crc kubenswrapper[4793]: I0130 14:46:10.406131 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:10 crc kubenswrapper[4793]: E0130 14:46:10.407258 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:46:25 crc kubenswrapper[4793]: I0130 14:46:25.398116 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:25 crc kubenswrapper[4793]: E0130 14:46:25.398713 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:46:39 crc kubenswrapper[4793]: I0130 14:46:39.398395 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:39 crc kubenswrapper[4793]: E0130 14:46:39.399208 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:46:54 crc kubenswrapper[4793]: I0130 14:46:54.402037 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:54 crc kubenswrapper[4793]: E0130 14:46:54.404464 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:07 crc kubenswrapper[4793]: I0130 14:47:07.398085 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:07 crc kubenswrapper[4793]: E0130 14:47:07.398904 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:20 crc kubenswrapper[4793]: I0130 14:47:20.406628 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:20 crc kubenswrapper[4793]: E0130 14:47:20.407773 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:32 crc kubenswrapper[4793]: I0130 14:47:32.399040 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:32 crc kubenswrapper[4793]: E0130 14:47:32.399847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:47 crc kubenswrapper[4793]: I0130 14:47:47.398975 4793 
scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:47 crc kubenswrapper[4793]: E0130 14:47:47.400372 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:58 crc kubenswrapper[4793]: I0130 14:47:58.398388 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:58 crc kubenswrapper[4793]: E0130 14:47:58.399101 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:12 crc kubenswrapper[4793]: I0130 14:48:12.397980 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:12 crc kubenswrapper[4793]: E0130 14:48:12.398876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:26 crc kubenswrapper[4793]: I0130 14:48:26.398428 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:26 crc kubenswrapper[4793]: E0130 14:48:26.399501 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:41 crc kubenswrapper[4793]: I0130 14:48:41.398483 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:41 crc kubenswrapper[4793]: E0130 14:48:41.399118 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:53 crc kubenswrapper[4793]: I0130 14:48:53.399163 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:53 crc kubenswrapper[4793]: E0130 14:48:53.400029 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:07 crc kubenswrapper[4793]: I0130 14:49:07.398395 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:07 crc kubenswrapper[4793]: E0130 14:49:07.399606 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:19 crc kubenswrapper[4793]: I0130 14:49:19.642162 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:19 crc kubenswrapper[4793]: E0130 14:49:19.642824 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:33 crc kubenswrapper[4793]: I0130 14:49:33.398752 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:33 crc kubenswrapper[4793]: E0130 14:49:33.399944 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:47 crc kubenswrapper[4793]: I0130 14:49:47.397929 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:47 crc kubenswrapper[4793]: I0130 14:49:47.933623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2"} Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.448600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:49:55 crc kubenswrapper[4793]: E0130 14:49:55.458551 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerName="collect-profiles" Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.458638 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerName="collect-profiles" Jan 30 14:49:55 crc 
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.460399 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.464801 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"]
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.606361 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.606523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.606551 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708690 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
\"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.731940 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.818357 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:49:56 crc kubenswrapper[4793]: I0130 14:49:56.410651 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.012721 4793 generic.go:334] "Generic (PLEG): container finished" podID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8" exitCode=0 Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.012893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8"} Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.013018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerStarted","Data":"eb8aba70dedaa058f3a16e5f14146fe310d30f48bd736ec9df6877aa331a5240"} Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.014946 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:49:59 crc kubenswrapper[4793]: I0130 14:49:59.043852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerStarted","Data":"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640"} Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.437024 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.439268 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.465291 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"]
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.501083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.501347 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.501549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.603363 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.603499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.603545 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.604136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.604469 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.729258 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.771084 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv"
Jan 30 14:50:01 crc kubenswrapper[4793]: I0130 14:50:01.459521 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"]
Jan 30 14:50:02 crc kubenswrapper[4793]: I0130 14:50:02.075102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c"}
Jan 30 14:50:02 crc kubenswrapper[4793]: I0130 14:50:02.075640 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"6169685a639926301d571b04cc5d15f21a0a9d940ee376e0840462ee49a612de"}
Jan 30 14:50:03 crc kubenswrapper[4793]: I0130 14:50:03.085061 4793 generic.go:334] "Generic (PLEG): container finished" podID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" exitCode=0
Jan 30 14:50:03 crc kubenswrapper[4793]: I0130 14:50:03.085134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c"}
Jan 30 14:50:05 crc kubenswrapper[4793]: I0130 14:50:05.122890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2"}
Jan 30 14:50:07 crc kubenswrapper[4793]: I0130 14:50:07.142839 4793 generic.go:334] "Generic (PLEG): container finished" podID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" exitCode=0
Jan 30 14:50:07 crc kubenswrapper[4793]: I0130 14:50:07.143378 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2"}
Jan 30 14:50:08 crc kubenswrapper[4793]: I0130 14:50:08.828989 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 14:50:08 crc kubenswrapper[4793]: I0130 14:50:08.829236 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 14:50:12 crc kubenswrapper[4793]: I0130 14:50:12.194563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"}
event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"} Jan 30 14:50:12 crc kubenswrapper[4793]: I0130 14:50:12.224843 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jlnlv" podStartSLOduration=4.122386512 podStartE2EDuration="12.224821567s" podCreationTimestamp="2026-01-30 14:50:00 +0000 UTC" firstStartedPulling="2026-01-30 14:50:03.087413177 +0000 UTC m=+4013.788761668" lastFinishedPulling="2026-01-30 14:50:11.189848232 +0000 UTC m=+4021.891196723" observedRunningTime="2026-01-30 14:50:12.220810599 +0000 UTC m=+4022.922159090" watchObservedRunningTime="2026-01-30 14:50:12.224821567 +0000 UTC m=+4022.926170058" Jan 30 14:50:14 crc kubenswrapper[4793]: I0130 14:50:14.217108 4793 generic.go:334] "Generic (PLEG): container finished" podID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" exitCode=0 Jan 30 14:50:14 crc kubenswrapper[4793]: I0130 14:50:14.217182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640"} Jan 30 14:50:16 crc kubenswrapper[4793]: I0130 14:50:16.240534 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerStarted","Data":"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5"} Jan 30 14:50:16 crc kubenswrapper[4793]: I0130 14:50:16.267104 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gf56s" podStartSLOduration=3.672785386 podStartE2EDuration="21.267084663s" podCreationTimestamp="2026-01-30 14:49:55 +0000 UTC" firstStartedPulling="2026-01-30 14:49:57.014665867 +0000 UTC m=+4007.716014358" lastFinishedPulling="2026-01-30 14:50:14.608965144 +0000 UTC m=+4025.310313635" observedRunningTime="2026-01-30 14:50:16.261021924 +0000 UTC m=+4026.962370425" watchObservedRunningTime="2026-01-30 14:50:16.267084663 +0000 UTC m=+4026.968433154" Jan 30 14:50:20 crc kubenswrapper[4793]: I0130 14:50:20.772248 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:20 crc kubenswrapper[4793]: I0130 14:50:20.772802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:21 crc kubenswrapper[4793]: I0130 14:50:21.838291 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jlnlv" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:21 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:21 crc kubenswrapper[4793]: > Jan 30 14:50:25 crc kubenswrapper[4793]: I0130 14:50:25.819549 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:25 crc kubenswrapper[4793]: I0130 14:50:25.820123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:26 crc kubenswrapper[4793]: I0130 14:50:26.881483 4793 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:26 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:26 crc kubenswrapper[4793]: > Jan 30 14:50:30 crc kubenswrapper[4793]: I0130 14:50:30.826701 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:30 crc kubenswrapper[4793]: I0130 14:50:30.877931 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:31 crc kubenswrapper[4793]: I0130 14:50:31.637030 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:32 crc kubenswrapper[4793]: I0130 14:50:32.361326 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jlnlv" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" containerID="cri-o://6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" gracePeriod=2 Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.245333 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.248425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"4fa1a794-f8b8-400b-b829-57f761da53bf\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.248496 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"4fa1a794-f8b8-400b-b829-57f761da53bf\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.248598 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"4fa1a794-f8b8-400b-b829-57f761da53bf\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.249488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities" (OuterVolumeSpecName: "utilities") pod "4fa1a794-f8b8-400b-b829-57f761da53bf" (UID: "4fa1a794-f8b8-400b-b829-57f761da53bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.256824 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp" (OuterVolumeSpecName: "kube-api-access-mj4bp") pod "4fa1a794-f8b8-400b-b829-57f761da53bf" (UID: "4fa1a794-f8b8-400b-b829-57f761da53bf"). InnerVolumeSpecName "kube-api-access-mj4bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.291514 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa1a794-f8b8-400b-b829-57f761da53bf" (UID: "4fa1a794-f8b8-400b-b829-57f761da53bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.350699 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.350743 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.350781 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.383471 4793 generic.go:334] "Generic (PLEG): container finished" podID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" exitCode=0 Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.383528 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.383550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"} Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.384472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"6169685a639926301d571b04cc5d15f21a0a9d940ee376e0840462ee49a612de"} Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.384496 4793 scope.go:117] "RemoveContainer" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.410879 4793 scope.go:117] "RemoveContainer" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.434130 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.442956 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.495652 4793 scope.go:117] "RemoveContainer" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.637018 4793 scope.go:117] "RemoveContainer" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" Jan 30 14:50:33 crc kubenswrapper[4793]: E0130 14:50:33.637517 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80\": container with ID starting with 6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80 not found: ID does not exist" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.637547 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"} err="failed to get container status \"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80\": rpc error: code = NotFound desc = could not find container \"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80\": container with ID starting with 6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80 not found: ID does not exist" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.637569 4793 scope.go:117] "RemoveContainer" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" Jan 30 14:50:33 crc kubenswrapper[4793]: E0130 14:50:33.637969 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2\": container with ID starting with 5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2 not found: ID does not exist" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.638020 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2"} err="failed to get container status \"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2\": rpc error: code = NotFound desc = could not find container \"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2\": container with ID starting with 5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2 not found: ID does not exist" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.638072 4793 scope.go:117] "RemoveContainer" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" Jan 30 14:50:33 crc kubenswrapper[4793]: E0130 14:50:33.638539 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c\": container with ID starting with 32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c not found: ID does not exist" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.638568 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c"} err="failed to get container status \"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c\": rpc error: code = NotFound desc = could not find container \"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c\": container with ID starting with 32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c not found: ID does not exist" Jan 30 14:50:34 crc kubenswrapper[4793]: I0130 14:50:34.409343 4793 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" path="/var/lib/kubelet/pods/4fa1a794-f8b8-400b-b829-57f761da53bf/volumes" Jan 30 14:50:36 crc kubenswrapper[4793]: I0130 14:50:36.873856 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:36 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:36 crc kubenswrapper[4793]: > Jan 30 14:50:46 crc kubenswrapper[4793]: I0130 14:50:46.876602 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:46 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:46 crc kubenswrapper[4793]: > Jan 30 14:50:55 crc kubenswrapper[4793]: I0130 14:50:55.876665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:55 crc kubenswrapper[4793]: I0130 14:50:55.957854 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:56 crc kubenswrapper[4793]: I0130 14:50:56.685551 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:50:57 crc kubenswrapper[4793]: I0130 14:50:57.602604 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" containerID="cri-o://fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" gracePeriod=2 Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.330366 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.461509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"b58c525f-70f3-4640-a57c-9de37b17e01c\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.461629 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"b58c525f-70f3-4640-a57c-9de37b17e01c\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.461792 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"b58c525f-70f3-4640-a57c-9de37b17e01c\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.470605 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities" (OuterVolumeSpecName: "utilities") pod "b58c525f-70f3-4640-a57c-9de37b17e01c" (UID: "b58c525f-70f3-4640-a57c-9de37b17e01c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.471383 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.478581 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf" (OuterVolumeSpecName: "kube-api-access-lv2rf") pod "b58c525f-70f3-4640-a57c-9de37b17e01c" (UID: "b58c525f-70f3-4640-a57c-9de37b17e01c"). InnerVolumeSpecName "kube-api-access-lv2rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.573174 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.597695 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b58c525f-70f3-4640-a57c-9de37b17e01c" (UID: "b58c525f-70f3-4640-a57c-9de37b17e01c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616301 4793 generic.go:334] "Generic (PLEG): container finished" podID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" exitCode=0 Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5"} Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"eb8aba70dedaa058f3a16e5f14146fe310d30f48bd736ec9df6877aa331a5240"} Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616395 4793 scope.go:117] "RemoveContainer" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616524 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.657901 4793 scope.go:117] "RemoveContainer" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.661136 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.672298 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.675741 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.686400 4793 scope.go:117] "RemoveContainer" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.719990 4793 scope.go:117] "RemoveContainer" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" Jan 30 14:50:58 crc kubenswrapper[4793]: E0130 14:50:58.721198 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5\": container with ID starting with fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5 not found: ID does not exist" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721232 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5"} err="failed to get container status \"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5\": rpc error: code = NotFound desc = could not find container \"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5\": container with ID starting with fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5 not found: ID does not exist" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721271 4793 scope.go:117] "RemoveContainer" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" Jan 30 14:50:58 crc kubenswrapper[4793]: E0130 14:50:58.721732 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640\": container with ID starting with 365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640 not found: ID does not exist" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721781 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640"} err="failed to get container status \"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640\": rpc error: code = NotFound desc = could not find container \"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640\": container with ID starting with 365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640 not found: ID does not exist" Jan 30 14:50:58 crc 
Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721812 4793 scope.go:117] "RemoveContainer" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8"
Jan 30 14:50:58 crc kubenswrapper[4793]: E0130 14:50:58.723370 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8\": container with ID starting with 42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8 not found: ID does not exist" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8"
Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.723415 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8"} err="failed to get container status \"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8\": rpc error: code = NotFound desc = could not find container \"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8\": container with ID starting with 42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8 not found: ID does not exist"
Jan 30 14:51:00 crc kubenswrapper[4793]: I0130 14:51:00.409561 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" path="/var/lib/kubelet/pods/b58c525f-70f3-4640-a57c-9de37b17e01c/volumes"
Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.229475 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"]
Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230411 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-content"
Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230428 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-content"
Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230444 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-utilities"
Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230451 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-utilities"
Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230472 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server"
Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230479 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server"
Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230497 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-content"
Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230504 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-content"
Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230516 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-utilities"
Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230524 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-utilities"
podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-utilities" Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230532 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230538 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230760 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230783 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.232638 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.250875 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.370537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.370764 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.370837 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473339 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473371 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod 
\"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.494820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.555701 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:05 crc kubenswrapper[4793]: I0130 14:52:05.198104 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.194138 4793 generic.go:334] "Generic (PLEG): container finished" podID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" exitCode=0 Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.194190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce"} Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.195580 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerStarted","Data":"7d281de4bd80a47645e1191b1a907101005c1f6da7441fccffb894aceeed7a41"} Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.233961 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.240626 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.286491 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.315304 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.315456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.315631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418411 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418453 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418973 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.419024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.442824 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.570448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:07 crc kubenswrapper[4793]: I0130 14:52:07.231758 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.219788 4793 generic.go:334] "Generic (PLEG): container finished" podID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerID="9797cc08d205a357f7341259f74f234a05068c9223b29e62a420e0ce3c9ec65f" exitCode=0 Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.219919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"9797cc08d205a357f7341259f74f234a05068c9223b29e62a420e0ce3c9ec65f"} Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.220294 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerStarted","Data":"7b8fab036f2c800bfde40ab7395dabfb3875fce049341b6a53bcba807f11ac44"} Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.226192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerStarted","Data":"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb"} Jan 30 14:52:10 crc kubenswrapper[4793]: I0130 14:52:10.260075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerStarted","Data":"4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5"} Jan 30 14:52:10 crc kubenswrapper[4793]: I0130 14:52:10.262623 4793 generic.go:334] "Generic (PLEG): container finished" podID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" exitCode=0 Jan 30 14:52:10 crc kubenswrapper[4793]: I0130 14:52:10.262679 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb"} Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.284849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerStarted","Data":"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435"} Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.304705 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6cbd" podStartSLOduration=2.462898813 podStartE2EDuration="8.304686156s" podCreationTimestamp="2026-01-30 14:52:04 +0000 UTC" firstStartedPulling="2026-01-30 14:52:06.196885348 +0000 UTC m=+4136.898233829" lastFinishedPulling="2026-01-30 
Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.413780 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.413854 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:52:14 crc kubenswrapper[4793]: I0130 14:52:14.555883 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:14 crc kubenswrapper[4793]: I0130 14:52:14.556152 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:14 crc kubenswrapper[4793]: I0130 14:52:14.612391 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:17 crc kubenswrapper[4793]: I0130 14:52:17.337109 4793 generic.go:334] "Generic (PLEG): container finished" podID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerID="4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5" exitCode=0
Jan 30 14:52:17 crc kubenswrapper[4793]: I0130 14:52:17.337477 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5"}
Jan 30 14:52:18 crc kubenswrapper[4793]: I0130 14:52:18.364563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerStarted","Data":"8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8"}
Jan 30 14:52:18 crc kubenswrapper[4793]: I0130 14:52:18.410121 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nc58f" podStartSLOduration=2.856833559 podStartE2EDuration="12.410100164s" podCreationTimestamp="2026-01-30 14:52:06 +0000 UTC" firstStartedPulling="2026-01-30 14:52:08.225244568 +0000 UTC m=+4138.926593059" lastFinishedPulling="2026-01-30 14:52:17.778511173 +0000 UTC m=+4148.479859664" observedRunningTime="2026-01-30 14:52:18.395331642 +0000 UTC m=+4149.096680133" watchObservedRunningTime="2026-01-30 14:52:18.410100164 +0000 UTC m=+4149.111448655"
Jan 30 14:52:24 crc kubenswrapper[4793]: I0130 14:52:24.609274 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:26 crc kubenswrapper[4793]: I0130 14:52:26.570665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:26 crc kubenswrapper[4793]: I0130 14:52:26.576179 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc58f"
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:26 crc kubenswrapper[4793]: I0130 14:52:26.631833 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:27 crc kubenswrapper[4793]: I0130 14:52:27.503920 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:27 crc kubenswrapper[4793]: I0130 14:52:27.586621 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:27 crc kubenswrapper[4793]: I0130 14:52:27.586889 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r6cbd" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" containerID="cri-o://7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" gracePeriod=2 Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.458984 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.460409 4793 generic.go:334] "Generic (PLEG): container finished" podID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" exitCode=0 Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.460464 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435"} Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.461715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"7d281de4bd80a47645e1191b1a907101005c1f6da7441fccffb894aceeed7a41"} Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.461737 4793 scope.go:117] "RemoveContainer" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.482586 4793 scope.go:117] "RemoveContainer" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.543173 4793 scope.go:117] "RemoveContainer" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.561770 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"e7b63510-a909-4a19-83a9-7aeeae35c681\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.561908 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"e7b63510-a909-4a19-83a9-7aeeae35c681\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.561944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod \"e7b63510-a909-4a19-83a9-7aeeae35c681\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.562862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities" (OuterVolumeSpecName: "utilities") pod "e7b63510-a909-4a19-83a9-7aeeae35c681" (UID: "e7b63510-a909-4a19-83a9-7aeeae35c681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.577895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj" (OuterVolumeSpecName: "kube-api-access-m2pnj") pod "e7b63510-a909-4a19-83a9-7aeeae35c681" (UID: "e7b63510-a909-4a19-83a9-7aeeae35c681"). InnerVolumeSpecName "kube-api-access-m2pnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.586955 4793 scope.go:117] "RemoveContainer" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" Jan 30 14:52:28 crc kubenswrapper[4793]: E0130 14:52:28.588288 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435\": container with ID starting with 7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435 not found: ID does not exist" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.588341 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435"} err="failed to get container status \"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435\": rpc error: code = NotFound desc = could not find container \"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435\": container with ID starting with 7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435 not found: ID does not exist" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.588361 4793 scope.go:117] "RemoveContainer" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" Jan 30 14:52:28 crc kubenswrapper[4793]: E0130 14:52:28.588723 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb\": container with ID starting with a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb not found: ID does not exist" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.588756 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb"} err="failed to get container status \"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb\": rpc error: code = NotFound desc = could not find container \"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb\": container with ID starting with a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb not found: ID does not exist" Jan 30 14:52:28 crc 
kubenswrapper[4793]: I0130 14:52:28.588781 4793 scope.go:117] "RemoveContainer" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" Jan 30 14:52:28 crc kubenswrapper[4793]: E0130 14:52:28.588992 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce\": container with ID starting with 7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce not found: ID does not exist" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.589009 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce"} err="failed to get container status \"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce\": rpc error: code = NotFound desc = could not find container \"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce\": container with ID starting with 7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce not found: ID does not exist" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.618586 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7b63510-a909-4a19-83a9-7aeeae35c681" (UID: "e7b63510-a909-4a19-83a9-7aeeae35c681"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.664671 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.664725 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.664743 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:29 crc kubenswrapper[4793]: I0130 14:52:29.471355 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:29 crc kubenswrapper[4793]: I0130 14:52:29.525074 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:29 crc kubenswrapper[4793]: I0130 14:52:29.535305 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:30 crc kubenswrapper[4793]: I0130 14:52:30.172158 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:30 crc kubenswrapper[4793]: I0130 14:52:30.410669 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" path="/var/lib/kubelet/pods/e7b63510-a909-4a19-83a9-7aeeae35c681/volumes" Jan 30 14:52:30 crc kubenswrapper[4793]: I0130 14:52:30.480196 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nc58f" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" containerID="cri-o://8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8" gracePeriod=2 Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.508929 4793 generic.go:334] "Generic (PLEG): container finished" podID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerID="8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8" exitCode=0 Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.509002 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8"} Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.509299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"7b8fab036f2c800bfde40ab7395dabfb3875fce049341b6a53bcba807f11ac44"} Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.509319 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8fab036f2c800bfde40ab7395dabfb3875fce049341b6a53bcba807f11ac44" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.525187 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.656846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.656963 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.657007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.660902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities" (OuterVolumeSpecName: "utilities") pod "53fa7ee2-40c6-42b2-83e7-91560b4ae614" (UID: "53fa7ee2-40c6-42b2-83e7-91560b4ae614"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.664242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn" (OuterVolumeSpecName: "kube-api-access-bx2bn") pod "53fa7ee2-40c6-42b2-83e7-91560b4ae614" (UID: "53fa7ee2-40c6-42b2-83e7-91560b4ae614"). InnerVolumeSpecName "kube-api-access-bx2bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.715094 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53fa7ee2-40c6-42b2-83e7-91560b4ae614" (UID: "53fa7ee2-40c6-42b2-83e7-91560b4ae614"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.760019 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.760071 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.760084 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:32 crc kubenswrapper[4793]: I0130 14:52:32.517116 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:32 crc kubenswrapper[4793]: I0130 14:52:32.540262 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:32 crc kubenswrapper[4793]: I0130 14:52:32.550021 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:34 crc kubenswrapper[4793]: I0130 14:52:34.408914 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" path="/var/lib/kubelet/pods/53fa7ee2-40c6-42b2-83e7-91560b4ae614/volumes" Jan 30 14:52:42 crc kubenswrapper[4793]: I0130 14:52:42.414241 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:52:42 crc kubenswrapper[4793]: I0130 14:52:42.414768 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.413715 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.414362 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.414416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.415248 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.415351 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2" gracePeriod=600 Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870537 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2" exitCode=0 Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870815 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2"} Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870841 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"} Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870857 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:55:12 crc kubenswrapper[4793]: I0130 14:55:12.413755 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:55:12 crc kubenswrapper[4793]: I0130 14:55:12.414367 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:55:42 crc kubenswrapper[4793]: I0130 14:55:42.413972 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:55:42 crc kubenswrapper[4793]: I0130 14:55:42.414552 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.414230 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.414786 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.414826 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.415628 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.415696 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" gracePeriod=600 Jan 30 14:56:12 crc kubenswrapper[4793]: E0130 14:56:12.623661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.499170 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" exitCode=0 Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.499238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"} Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.500210 4793 scope.go:117] "RemoveContainer" containerID="cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2" Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.501103 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:13 crc kubenswrapper[4793]: E0130 14:56:13.501382 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:25 crc kubenswrapper[4793]: I0130 14:56:25.398297 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:25 crc kubenswrapper[4793]: E0130 14:56:25.399087 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:38 crc kubenswrapper[4793]: I0130 14:56:38.400323 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:38 crc kubenswrapper[4793]: E0130 14:56:38.401193 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:51 crc kubenswrapper[4793]: I0130 14:56:51.398397 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:51 crc kubenswrapper[4793]: E0130 14:56:51.399318 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:02 crc kubenswrapper[4793]: I0130 14:57:02.398322 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:02 crc kubenswrapper[4793]: E0130 14:57:02.399037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:16 crc kubenswrapper[4793]: I0130 14:57:16.398142 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:16 crc kubenswrapper[4793]: E0130 14:57:16.398853 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:30 crc kubenswrapper[4793]: I0130 14:57:30.408065 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:30 crc kubenswrapper[4793]: E0130 14:57:30.408861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:43 crc kubenswrapper[4793]: I0130 14:57:43.398764 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:43 crc kubenswrapper[4793]: E0130 14:57:43.399776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:55 crc kubenswrapper[4793]: I0130 14:57:55.397905 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:55 crc kubenswrapper[4793]: E0130 14:57:55.398576 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:10 crc kubenswrapper[4793]: I0130 14:58:10.415937 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:10 crc kubenswrapper[4793]: E0130 14:58:10.416815 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:24 crc kubenswrapper[4793]: I0130 14:58:24.398449 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:24 crc kubenswrapper[4793]: E0130 14:58:24.399284 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:39 crc kubenswrapper[4793]: I0130 14:58:39.398813 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:39 crc kubenswrapper[4793]: E0130 14:58:39.400735 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:53 crc kubenswrapper[4793]: I0130 14:58:53.398908 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:53 crc kubenswrapper[4793]: E0130 14:58:53.399661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:59 crc kubenswrapper[4793]: I0130 14:58:59.808516 4793 scope.go:117] "RemoveContainer" 
containerID="4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5" Jan 30 14:58:59 crc kubenswrapper[4793]: I0130 14:58:59.858401 4793 scope.go:117] "RemoveContainer" containerID="9797cc08d205a357f7341259f74f234a05068c9223b29e62a420e0ce3c9ec65f" Jan 30 14:58:59 crc kubenswrapper[4793]: I0130 14:58:59.895659 4793 scope.go:117] "RemoveContainer" containerID="8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8" Jan 30 14:59:05 crc kubenswrapper[4793]: I0130 14:59:05.398110 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:05 crc kubenswrapper[4793]: E0130 14:59:05.398951 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:16 crc kubenswrapper[4793]: I0130 14:59:16.398523 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:16 crc kubenswrapper[4793]: E0130 14:59:16.399384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:29 crc kubenswrapper[4793]: I0130 14:59:29.398796 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:29 crc kubenswrapper[4793]: E0130 14:59:29.399605 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:40 crc kubenswrapper[4793]: I0130 14:59:40.404266 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:40 crc kubenswrapper[4793]: E0130 14:59:40.405111 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:53 crc kubenswrapper[4793]: I0130 14:59:53.399004 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:53 crc kubenswrapper[4793]: E0130 14:59:53.399723 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.187155 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc"] Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188091 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188107 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188124 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188132 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188167 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188200 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188224 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188232 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188252 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188259 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188276 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188283 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188515 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188530 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.189321 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.195887 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc"] Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.232329 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.271490 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.271580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.271603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.373207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.373537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.373615 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.374975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 
15:00:00.392414 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.395876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.412660 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.563871 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.044770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc"] Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.604816 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerStarted","Data":"4237192bc7a1eb44289a5eeb0516108067794976041ba4876322f83681ec69f1"} Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.605036 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerStarted","Data":"5e63b95c3d5fe03a218b269dd621485abf1eeaa28d316c45d93b54d2a97ba10d"} Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.622442 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" podStartSLOduration=1.622418827 podStartE2EDuration="1.622418827s" podCreationTimestamp="2026-01-30 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:00:01.618566162 +0000 UTC m=+4612.319914653" watchObservedRunningTime="2026-01-30 15:00:01.622418827 +0000 UTC m=+4612.323767318" Jan 30 15:00:02 crc kubenswrapper[4793]: I0130 15:00:02.613663 4793 generic.go:334] "Generic (PLEG): container finished" podID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerID="4237192bc7a1eb44289a5eeb0516108067794976041ba4876322f83681ec69f1" exitCode=0 Jan 30 15:00:02 crc kubenswrapper[4793]: I0130 15:00:02.613905 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerDied","Data":"4237192bc7a1eb44289a5eeb0516108067794976041ba4876322f83681ec69f1"} Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.053000 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.157836 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.157941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.158070 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.158690 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume" (OuterVolumeSpecName: "config-volume") pod "1eaa1894-c4b7-4c79-955c-7b713cbe1955" (UID: "1eaa1894-c4b7-4c79-955c-7b713cbe1955"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.171260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1eaa1894-c4b7-4c79-955c-7b713cbe1955" (UID: "1eaa1894-c4b7-4c79-955c-7b713cbe1955"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.172306 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6" (OuterVolumeSpecName: "kube-api-access-898h6") pod "1eaa1894-c4b7-4c79-955c-7b713cbe1955" (UID: "1eaa1894-c4b7-4c79-955c-7b713cbe1955"). InnerVolumeSpecName "kube-api-access-898h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.260796 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.260828 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.260840 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.641423 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerDied","Data":"5e63b95c3d5fe03a218b269dd621485abf1eeaa28d316c45d93b54d2a97ba10d"} Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.641466 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e63b95c3d5fe03a218b269dd621485abf1eeaa28d316c45d93b54d2a97ba10d" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.641523 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.712731 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.720809 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 15:00:06 crc kubenswrapper[4793]: I0130 15:00:06.411558 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" path="/var/lib/kubelet/pods/dea958b8-aeb8-4696-b604-f1459d6d5608/volumes" Jan 30 15:00:08 crc kubenswrapper[4793]: I0130 15:00:08.399380 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:00:08 crc kubenswrapper[4793]: E0130 15:00:08.399938 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:23 crc kubenswrapper[4793]: I0130 15:00:23.398329 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:00:23 crc kubenswrapper[4793]: E0130 15:00:23.400259 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.936185 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"]
Jan 30 15:00:32 crc kubenswrapper[4793]: E0130 15:00:32.937852 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerName="collect-profiles"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.937868 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerName="collect-profiles"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.938124 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerName="collect-profiles"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.939873 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.955862 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"]
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.967127 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.967210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.967373 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.068923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.088012 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.271815 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.799222 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"]
Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.902015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerStarted","Data":"38b88db308377b3dbfec0ff500616be7f84f028d8a80cd35485f2bde95e3437f"}
Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.407168 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"
Jan 30 15:00:34 crc kubenswrapper[4793]: E0130 15:00:34.407758 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.915673 4793 generic.go:334] "Generic (PLEG): container finished" podID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215" exitCode=0
Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.916011 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"}
Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.920986 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 15:00:36 crc kubenswrapper[4793]: I0130 15:00:36.937998 4793 generic.go:334] "Generic (PLEG): container finished" podID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede" exitCode=0
Jan 30 15:00:36 crc kubenswrapper[4793]: I0130 15:00:36.938084 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"}
Jan 30 15:00:38 crc kubenswrapper[4793]: I0130 15:00:38.959951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerStarted","Data":"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"}
Jan 30 15:00:39 crc kubenswrapper[4793]: I0130 15:00:39.000660 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xjtpf" podStartSLOduration=4.030463643 podStartE2EDuration="7.000643803s" podCreationTimestamp="2026-01-30 15:00:32 +0000 UTC" firstStartedPulling="2026-01-30 15:00:34.920752804 +0000 UTC m=+4645.622101295" lastFinishedPulling="2026-01-30 15:00:37.890932964 +0000 UTC m=+4648.592281455" observedRunningTime="2026-01-30 15:00:38.997507736 +0000 UTC m=+4649.698856247" watchObservedRunningTime="2026-01-30 15:00:39.000643803 +0000 UTC m=+4649.701992294"
Jan 30 15:00:43 crc kubenswrapper[4793]: I0130 15:00:43.272163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:43 crc kubenswrapper[4793]: I0130 15:00:43.273639 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:43 crc kubenswrapper[4793]: I0130 15:00:43.321689 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:44 crc kubenswrapper[4793]: I0130 15:00:44.464275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:44 crc kubenswrapper[4793]: I0130 15:00:44.519971 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"]
Jan 30 15:00:46 crc kubenswrapper[4793]: I0130 15:00:46.022124 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xjtpf" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server" containerID="cri-o://07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" gracePeriod=2
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.017694 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041561 4793 generic.go:334] "Generic (PLEG): container finished" podID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" exitCode=0
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"}
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041632 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"38b88db308377b3dbfec0ff500616be7f84f028d8a80cd35485f2bde95e3437f"}
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041652 4793 scope.go:117] "RemoveContainer" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041671 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.044231 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"4020bc12-6cb5-4f85-9298-32e7874c7946\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") "
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.074248 4793 scope.go:117] "RemoveContainer" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.094321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4020bc12-6cb5-4f85-9298-32e7874c7946" (UID: "4020bc12-6cb5-4f85-9298-32e7874c7946"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.110179 4793 scope.go:117] "RemoveContainer" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.147698 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"4020bc12-6cb5-4f85-9298-32e7874c7946\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") "
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.147775 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"4020bc12-6cb5-4f85-9298-32e7874c7946\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") "
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.148347 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.149291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities" (OuterVolumeSpecName: "utilities") pod "4020bc12-6cb5-4f85-9298-32e7874c7946" (UID: "4020bc12-6cb5-4f85-9298-32e7874c7946"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.154747 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd" (OuterVolumeSpecName: "kube-api-access-78vhd") pod "4020bc12-6cb5-4f85-9298-32e7874c7946" (UID: "4020bc12-6cb5-4f85-9298-32e7874c7946"). InnerVolumeSpecName "kube-api-access-78vhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.157034 4793 scope.go:117] "RemoveContainer" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"
Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.157675 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1\": container with ID starting with 07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1 not found: ID does not exist" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.157731 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"} err="failed to get container status \"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1\": rpc error: code = NotFound desc = could not find container \"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1\": container with ID starting with 07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1 not found: ID does not exist"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.157762 4793 scope.go:117] "RemoveContainer" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"
Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.158311 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede\": container with ID starting with 68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede not found: ID does not exist" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.158413 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"} err="failed to get container status \"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede\": rpc error: code = NotFound desc = could not find container \"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede\": container with ID starting with 68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede not found: ID does not exist"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.158501 4793 scope.go:117] "RemoveContainer" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"
Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.158935 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215\": container with ID starting with ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215 not found: ID does not exist" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.158971 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"} err="failed to get container status \"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215\": rpc error: code = NotFound desc = could not find container \"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215\": container with ID starting with ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215 not found: ID does not exist"
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.249996 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") on node \"crc\" DevicePath \"\""
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.250315 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.379896 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"]
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.388469 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"]
Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.398621 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"
Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.398955 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:00:48 crc kubenswrapper[4793]: I0130 15:00:48.409964 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" path="/var/lib/kubelet/pods/4020bc12-6cb5-4f85-9298-32e7874c7946/volumes"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.184661 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"]
Jan 30 15:00:58 crc kubenswrapper[4793]: E0130 15:00:58.186848 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-content"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.186922 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-content"
Jan 30 15:00:58 crc kubenswrapper[4793]: E0130 15:00:58.187019 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.187176 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server"
Jan 30 15:00:58 crc kubenswrapper[4793]: E0130 15:00:58.187232 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-utilities"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.187278 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-utilities"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.187588 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.189194 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.196468 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"]
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.291203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.291356 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.291467 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.393752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.393828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.393914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.394392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.394469 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.416623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.546131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:00:59 crc kubenswrapper[4793]: I0130 15:00:59.059304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"]
Jan 30 15:00:59 crc kubenswrapper[4793]: I0130 15:00:59.154193 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerStarted","Data":"fb93cb9ff568521eef67dcd73afac2fcad2954cae531e13237dc9dddfdefc166"}
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.047323 4793 scope.go:117] "RemoveContainer" containerID="169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.167346 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c" exitCode=0
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.167459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"}
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.179346 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496421-n28p5"]
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.181435 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.223761 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496421-n28p5"]
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.226898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.226939 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.226982 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.227011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329339 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.335790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.335803 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.341310 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.352486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.507940 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:01 crc kubenswrapper[4793]: I0130 15:01:01.096775 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496421-n28p5"]
Jan 30 15:01:01 crc kubenswrapper[4793]: W0130 15:01:01.100395 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617a2857_c4b0_4558_9834_551a98cd534f.slice/crio-22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337 WatchSource:0}: Error finding container 22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337: Status 404 returned error can't find the container with id 22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337
Jan 30 15:01:01 crc kubenswrapper[4793]: I0130 15:01:01.194140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerStarted","Data":"22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337"}
Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.206160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerStarted","Data":"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"}
Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.208026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerStarted","Data":"1596cdc010d60aaf0a6cebd1da4a3bfed114acf0f745eba93f905ae48089cb08"}
Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.258068 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496421-n28p5" podStartSLOduration=2.258032564 podStartE2EDuration="2.258032564s" podCreationTimestamp="2026-01-30 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:01:02.246897491 +0000 UTC m=+4672.948246002" watchObservedRunningTime="2026-01-30 15:01:02.258032564 +0000 UTC m=+4672.959381055"
Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.401076 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"
Jan 30 15:01:02 crc kubenswrapper[4793]: E0130 15:01:02.401364 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:01:13 crc kubenswrapper[4793]: I0130 15:01:13.291653 4793 generic.go:334] "Generic (PLEG): container finished" podID="617a2857-c4b0-4558-9834-551a98cd534f" containerID="1596cdc010d60aaf0a6cebd1da4a3bfed114acf0f745eba93f905ae48089cb08" exitCode=0
Jan 30 15:01:13 crc kubenswrapper[4793]: I0130 15:01:13.291731 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerDied","Data":"1596cdc010d60aaf0a6cebd1da4a3bfed114acf0f745eba93f905ae48089cb08"}
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.398713 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.746609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.827615 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") "
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.827737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") "
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.827880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") "
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.828072 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") "
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.844889 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2" (OuterVolumeSpecName: "kube-api-access-nqlj2") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "kube-api-access-nqlj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.851145 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.924257 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.932672 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.932711 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") on node \"crc\" DevicePath \"\""
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.932727 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.989201 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data" (OuterVolumeSpecName: "config-data") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.034360 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.310213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0"}
Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.313303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerDied","Data":"22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337"}
Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.313347 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337"
Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.313408 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5"
Jan 30 15:01:17 crc kubenswrapper[4793]: I0130 15:01:17.332569 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e" exitCode=0
Jan 30 15:01:17 crc kubenswrapper[4793]: I0130 15:01:17.332632 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"}
Jan 30 15:01:19 crc kubenswrapper[4793]: I0130 15:01:19.351999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerStarted","Data":"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"}
Jan 30 15:01:19 crc kubenswrapper[4793]: I0130 15:01:19.377976 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-crpmz" podStartSLOduration=3.516454064 podStartE2EDuration="21.377954867s" podCreationTimestamp="2026-01-30 15:00:58 +0000 UTC" firstStartedPulling="2026-01-30 15:01:00.210973105 +0000 UTC m=+4670.912321586" lastFinishedPulling="2026-01-30 15:01:18.072473898 +0000 UTC m=+4688.773822389" observedRunningTime="2026-01-30 15:01:19.375182329 +0000 UTC m=+4690.076530820" watchObservedRunningTime="2026-01-30 15:01:19.377954867 +0000 UTC m=+4690.079303358"
Jan 30 15:01:28 crc kubenswrapper[4793]: I0130 15:01:28.546432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:01:28 crc kubenswrapper[4793]: I0130 15:01:28.546949 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:01:29 crc kubenswrapper[4793]: I0130 15:01:29.603935 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" probeResult="failure" output=<
Jan 30 15:01:29 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 15:01:29 crc kubenswrapper[4793]: >
Jan 30 15:01:39 crc kubenswrapper[4793]: I0130 15:01:39.594748 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" probeResult="failure" output=<
Jan 30 15:01:39 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 15:01:39 crc kubenswrapper[4793]: >
Jan 30 15:01:49 crc kubenswrapper[4793]: I0130 15:01:49.599881 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" probeResult="failure" output=<
Jan 30 15:01:49 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 15:01:49 crc kubenswrapper[4793]: >
Jan 30 15:01:58 crc kubenswrapper[4793]: I0130 15:01:58.686013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:01:58 crc kubenswrapper[4793]: I0130 15:01:58.745199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:01:59 crc kubenswrapper[4793]: I0130 15:01:59.402854 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"]
Jan 30 15:01:59 crc kubenswrapper[4793]: I0130 15:01:59.717643 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" containerID="cri-o://1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" gracePeriod=2
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.439801 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.580456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"c7abe19e-d694-43f4-b261-cdf9b3e60681\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") "
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.580588 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"c7abe19e-d694-43f4-b261-cdf9b3e60681\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") "
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.580733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"c7abe19e-d694-43f4-b261-cdf9b3e60681\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") "
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.581484 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities" (OuterVolumeSpecName: "utilities") pod "c7abe19e-d694-43f4-b261-cdf9b3e60681" (UID: "c7abe19e-d694-43f4-b261-cdf9b3e60681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.582520 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.598326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq" (OuterVolumeSpecName: "kube-api-access-9m7vq") pod "c7abe19e-d694-43f4-b261-cdf9b3e60681" (UID: "c7abe19e-d694-43f4-b261-cdf9b3e60681"). InnerVolumeSpecName "kube-api-access-9m7vq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.684783 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") on node \"crc\" DevicePath \"\""
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.710406 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7abe19e-d694-43f4-b261-cdf9b3e60681" (UID: "c7abe19e-d694-43f4-b261-cdf9b3e60681"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728264 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" exitCode=0
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728309 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"}
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728332 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728358 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"fb93cb9ff568521eef67dcd73afac2fcad2954cae531e13237dc9dddfdefc166"}
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728379 4793 scope.go:117] "RemoveContainer" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.748591 4793 scope.go:117] "RemoveContainer" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.767738 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"]
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.775867 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"]
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.778483 4793 scope.go:117] "RemoveContainer" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.786331 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.825174 4793 scope.go:117] "RemoveContainer" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"
Jan 30 15:02:00 crc kubenswrapper[4793]: E0130 15:02:00.825618 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88\": container with ID starting with 1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88 not found: ID does not exist" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.825658 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"} err="failed to get container status \"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88\": rpc error: code = NotFound desc = could not find container \"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88\": container with ID starting with 1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88 not found: ID does not exist"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.825683 4793 scope.go:117] "RemoveContainer" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"
Jan 30 15:02:00 crc kubenswrapper[4793]: E0130 15:02:00.827611 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e\": container with ID starting with 98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e not found: ID does not exist" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.827687 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"} err="failed to get container status \"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e\": rpc error: code = NotFound desc = could not find container \"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e\": container with ID starting with 98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e not found: ID does not exist"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.827705 4793 scope.go:117] "RemoveContainer" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"
Jan 30 15:02:00 crc kubenswrapper[4793]: E0130 15:02:00.828116 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c\": container with ID starting with eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c not found: ID does not exist" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"
Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.828149 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"} err="failed to get container status \"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c\": rpc error: code = NotFound desc = could not find container \"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c\": container with ID starting with eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c not found: ID does not exist"
Jan 30 15:02:02 crc kubenswrapper[4793]: I0130 15:02:02.410412 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" path="/var/lib/kubelet/pods/c7abe19e-d694-43f4-b261-cdf9b3e60681/volumes"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.381811 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"]
Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382668 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-utilities"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382683 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-utilities"
Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382710 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617a2857-c4b0-4558-9834-551a98cd534f" containerName="keystone-cron"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382717 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="617a2857-c4b0-4558-9834-551a98cd534f" containerName="keystone-cron"
Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382733 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server"
Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382751 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-content"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382757 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-content"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382926 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="617a2857-c4b0-4558-9834-551a98cd534f" containerName="keystone-cron"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.384500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.426989 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"]
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.437018 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.437091 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.437188 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.538656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.539026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.539188 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.539978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.540581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.829601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.024343 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.533551 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"]
Jan 30 15:02:09 crc kubenswrapper[4793]: W0130 15:02:09.548949 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a5ddb3_fbef_4413_a123_81ea7ce9adf7.slice/crio-f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341 WatchSource:0}: Error finding container f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341: Status 404 returned error can't find the container with id f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341
Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.806018 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" exitCode=0
Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.806294 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71"}
Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.806320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerStarted","Data":"f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341"}
Jan 30 15:02:12 crc kubenswrapper[4793]: I0130 15:02:12.847521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerStarted","Data":"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736"}
Jan 30 15:02:14 crc kubenswrapper[4793]: I0130 15:02:14.871496 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" exitCode=0
Jan 30 15:02:14 crc kubenswrapper[4793]: I0130 15:02:14.872067 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736"}
Jan 30 15:02:16 crc kubenswrapper[4793]: I0130 15:02:16.892467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerStarted","Data":"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3"}
Jan 30 15:02:16 crc kubenswrapper[4793]: I0130 15:02:16.917516 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdw7k" podStartSLOduration=3.939188581 podStartE2EDuration="8.917496046s" podCreationTimestamp="2026-01-30 15:02:08 +0000 UTC" firstStartedPulling="2026-01-30 15:02:10.818266458 +0000 UTC m=+4741.519614949" lastFinishedPulling="2026-01-30 15:02:15.796573923 +0000 UTC m=+4746.497922414" observedRunningTime="2026-01-30 15:02:16.912732159 +0000 UTC m=+4747.614080660" watchObservedRunningTime="2026-01-30 15:02:16.917496046 +0000 UTC m=+4747.618844537"
Jan 30 15:02:19 crc kubenswrapper[4793]: I0130 15:02:19.024574 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:19 crc kubenswrapper[4793]: I0130 15:02:19.024881 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:19 crc kubenswrapper[4793]: I0130 15:02:19.074067 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:29 crc kubenswrapper[4793]: I0130 15:02:29.075806 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:29 crc kubenswrapper[4793]: I0130 15:02:29.121611 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"]
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.012733 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdw7k" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" containerID="cri-o://6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" gracePeriod=2
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.600399 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k"
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.707639 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") "
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.707797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") "
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.707842 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") "
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.708765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities" (OuterVolumeSpecName: "utilities") pod "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" (UID: "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.713592 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j" (OuterVolumeSpecName: "kube-api-access-hng6j") pod "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" (UID: "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7"). InnerVolumeSpecName "kube-api-access-hng6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.757231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" (UID: "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.810473 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.810511 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.810522 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") on node \"crc\" DevicePath \"\""
Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.026635 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" exitCode=0
Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.026677 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3"}
Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.026699 4793 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.027192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341"} Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.027207 4793 scope.go:117] "RemoveContainer" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.058099 4793 scope.go:117] "RemoveContainer" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.075173 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.085592 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.451467 4793 scope.go:117] "RemoveContainer" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.516033 4793 scope.go:117] "RemoveContainer" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" Jan 30 15:02:31 crc kubenswrapper[4793]: E0130 15:02:31.516529 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3\": container with ID starting with 6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3 not found: ID does not exist" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.516569 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3"} err="failed to get container status \"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3\": rpc error: code = NotFound desc = could not find container \"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3\": container with ID starting with 6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3 not found: ID does not exist" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.516591 4793 scope.go:117] "RemoveContainer" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" Jan 30 15:02:31 crc kubenswrapper[4793]: E0130 15:02:31.517002 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736\": container with ID starting with 17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736 not found: ID does not exist" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.517142 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736"} err="failed to get container status \"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736\": rpc error: code = NotFound desc = could not find 
container \"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736\": container with ID starting with 17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736 not found: ID does not exist" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.517241 4793 scope.go:117] "RemoveContainer" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" Jan 30 15:02:31 crc kubenswrapper[4793]: E0130 15:02:31.517617 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71\": container with ID starting with 9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71 not found: ID does not exist" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.517642 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71"} err="failed to get container status \"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71\": rpc error: code = NotFound desc = could not find container \"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71\": container with ID starting with 9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71 not found: ID does not exist" Jan 30 15:02:32 crc kubenswrapper[4793]: I0130 15:02:32.428125 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" path="/var/lib/kubelet/pods/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7/volumes" Jan 30 15:03:42 crc kubenswrapper[4793]: I0130 15:03:42.414081 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:03:42 crc kubenswrapper[4793]: I0130 15:03:42.414718 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:04:12 crc kubenswrapper[4793]: I0130 15:04:12.413701 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:04:12 crc kubenswrapper[4793]: I0130 15:04:12.414464 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.413343 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 
15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.413911 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.413961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.414793 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.414853 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0" gracePeriod=600 Jan 30 15:04:43 crc kubenswrapper[4793]: I0130 15:04:43.241544 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0" exitCode=0 Jan 30 15:04:43 crc kubenswrapper[4793]: I0130 15:04:43.241634 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0"} Jan 30 15:04:43 crc kubenswrapper[4793]: I0130 15:04:43.241944 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:04:44 crc kubenswrapper[4793]: I0130 15:04:44.257333 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71"} Jan 30 15:07:12 crc kubenswrapper[4793]: I0130 15:07:12.413206 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:07:12 crc kubenswrapper[4793]: I0130 15:07:12.413788 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836006 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:14 crc kubenswrapper[4793]: E0130 15:07:14.836559 4793 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-content" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836577 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-content" Jan 30 15:07:14 crc kubenswrapper[4793]: E0130 15:07:14.836601 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-utilities" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836609 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-utilities" Jan 30 15:07:14 crc kubenswrapper[4793]: E0130 15:07:14.836642 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836654 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836898 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.838764 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.860028 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.926211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.926570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.926687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.030395 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.051799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.157800 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.811888 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.999066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerStarted","Data":"1a35d3069eab5b35988d959a7b47b8631a96e9d363d3e40d680c3b80be285bba"} Jan 30 15:07:17 crc kubenswrapper[4793]: I0130 15:07:17.009707 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerID="2e68fc094c6474084a00ace7a1343c3281487ac0b42f6c0f86c4ce491d8395ce" exitCode=0 Jan 30 15:07:17 crc kubenswrapper[4793]: I0130 15:07:17.009756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"2e68fc094c6474084a00ace7a1343c3281487ac0b42f6c0f86c4ce491d8395ce"} Jan 30 15:07:17 crc kubenswrapper[4793]: I0130 15:07:17.011972 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:07:20 crc kubenswrapper[4793]: I0130 15:07:20.043951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerStarted","Data":"4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b"} Jan 30 15:07:23 crc kubenswrapper[4793]: I0130 15:07:23.076764 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerID="4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b" exitCode=0 Jan 30 15:07:23 crc kubenswrapper[4793]: I0130 15:07:23.076834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b"} Jan 30 15:07:28 crc kubenswrapper[4793]: I0130 15:07:28.126840 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerStarted","Data":"d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33"} Jan 30 15:07:28 crc kubenswrapper[4793]: I0130 15:07:28.155146 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wrdg9" podStartSLOduration=3.954083299 podStartE2EDuration="14.155126487s" podCreationTimestamp="2026-01-30 15:07:14 +0000 UTC" firstStartedPulling="2026-01-30 15:07:17.01166545 +0000 UTC m=+5047.713013941" lastFinishedPulling="2026-01-30 15:07:27.212708638 +0000 UTC m=+5057.914057129" observedRunningTime="2026-01-30 15:07:28.149458887 +0000 UTC m=+5058.850807398" watchObservedRunningTime="2026-01-30 15:07:28.155126487 +0000 UTC m=+5058.856474998" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.159888 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.160406 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.212452 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.302930 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.447064 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:37 crc kubenswrapper[4793]: I0130 15:07:37.271592 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wrdg9" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" containerID="cri-o://d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33" gracePeriod=2 Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.286454 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerID="d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33" exitCode=0 Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.286647 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33"} Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.288363 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"1a35d3069eab5b35988d959a7b47b8631a96e9d363d3e40d680c3b80be285bba"} Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.288398 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a35d3069eab5b35988d959a7b47b8631a96e9d363d3e40d680c3b80be285bba" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.344889 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.442583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"eaa7e68a-f5c8-4492-b539-96fff099748d\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.442636 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"eaa7e68a-f5c8-4492-b539-96fff099748d\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.442677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"eaa7e68a-f5c8-4492-b539-96fff099748d\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.443637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities" (OuterVolumeSpecName: "utilities") pod "eaa7e68a-f5c8-4492-b539-96fff099748d" (UID: "eaa7e68a-f5c8-4492-b539-96fff099748d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.454436 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n" (OuterVolumeSpecName: "kube-api-access-l4z6n") pod "eaa7e68a-f5c8-4492-b539-96fff099748d" (UID: "eaa7e68a-f5c8-4492-b539-96fff099748d"). InnerVolumeSpecName "kube-api-access-l4z6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.499413 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaa7e68a-f5c8-4492-b539-96fff099748d" (UID: "eaa7e68a-f5c8-4492-b539-96fff099748d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.544860 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.544912 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") on node \"crc\" DevicePath \"\"" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.544927 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:07:39 crc kubenswrapper[4793]: I0130 15:07:39.295164 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:39 crc kubenswrapper[4793]: I0130 15:07:39.327588 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:39 crc kubenswrapper[4793]: I0130 15:07:39.335661 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:40 crc kubenswrapper[4793]: I0130 15:07:40.410443 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" path="/var/lib/kubelet/pods/eaa7e68a-f5c8-4492-b539-96fff099748d/volumes" Jan 30 15:07:42 crc kubenswrapper[4793]: I0130 15:07:42.413994 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:07:42 crc kubenswrapper[4793]: I0130 15:07:42.414321 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.414131 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.414667 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.414709 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.415465 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.415513 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" gracePeriod=600 Jan 30 15:08:12 crc kubenswrapper[4793]: E0130 15:08:12.538822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.608320 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" exitCode=0 Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.608415 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71"} Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.608709 4793 scope.go:117] "RemoveContainer" containerID="c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.609662 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:12 crc kubenswrapper[4793]: E0130 15:08:12.610141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:27 crc kubenswrapper[4793]: I0130 15:08:27.398748 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:27 crc kubenswrapper[4793]: E0130 15:08:27.399456 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:42 crc kubenswrapper[4793]: I0130 15:08:42.399093 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:42 crc kubenswrapper[4793]: E0130 15:08:42.399844 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:53 crc kubenswrapper[4793]: I0130 15:08:53.397921 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:53 crc kubenswrapper[4793]: E0130 15:08:53.399765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:03 crc kubenswrapper[4793]: I0130 15:09:03.106670 4793 generic.go:334] "Generic (PLEG): container finished" podID="4bf53e2d-d024-4526-ada2-0ee6b461babb" containerID="d89fe0491771c7c6f955e91e1925c9e0d02dd442783163c9438dbd9b02ce47d9" exitCode=0 Jan 30 15:09:03 crc kubenswrapper[4793]: I0130 15:09:03.106791 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerDied","Data":"d89fe0491771c7c6f955e91e1925c9e0d02dd442783163c9438dbd9b02ce47d9"} Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.465466 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.533609 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.534772 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.534945 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535066 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535157 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535268 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535346 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc 
kubenswrapper[4793]: I0130 15:09:04.535410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.540111 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.540779 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data" (OuterVolumeSpecName: "config-data") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.541571 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.542720 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.547228 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt" (OuterVolumeSpecName: "kube-api-access-579bt") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "kube-api-access-579bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.592665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.601758 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.622067 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.626969 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637158 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637277 4793 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637340 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637397 4793 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637450 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637499 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.639695 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.639777 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 
crc kubenswrapper[4793]: I0130 15:09:04.639831 4793 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.668285 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.741584 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:05 crc kubenswrapper[4793]: I0130 15:09:05.124322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerDied","Data":"55c6a2b8062403d0e3d82dc5615fa6326ff29a1fce4fe5257e5d197c6f2071cb"} Jan 30 15:09:05 crc kubenswrapper[4793]: I0130 15:09:05.124407 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 15:09:05 crc kubenswrapper[4793]: I0130 15:09:05.124413 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c6a2b8062403d0e3d82dc5615fa6326ff29a1fce4fe5257e5d197c6f2071cb" Jan 30 15:09:06 crc kubenswrapper[4793]: I0130 15:09:06.398298 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:06 crc kubenswrapper[4793]: E0130 15:09:06.398836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378280 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378914 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-content" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378925 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-content" Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378945 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378951 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378972 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" containerName="tempest-tests-tempest-tests-runner" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378978 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" 
containerName="tempest-tests-tempest-tests-runner" Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378989 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-utilities" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378995 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-utilities" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.379175 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" containerName="tempest-tests-tempest-tests-runner" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.379206 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.379754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.383037 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9sb9w" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.394219 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.518337 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.518429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9vjt\" (UniqueName: \"kubernetes.io/projected/8de9d25e-7ca7-4338-a64e-ed95f7bd9de9-kube-api-access-q9vjt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.620353 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9vjt\" (UniqueName: \"kubernetes.io/projected/8de9d25e-7ca7-4338-a64e-ed95f7bd9de9-kube-api-access-q9vjt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.620559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.621619 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") device 
mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.728831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9vjt\" (UniqueName: \"kubernetes.io/projected/8de9d25e-7ca7-4338-a64e-ed95f7bd9de9-kube-api-access-q9vjt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.754725 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:09 crc kubenswrapper[4793]: I0130 15:09:09.003125 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:09 crc kubenswrapper[4793]: I0130 15:09:09.458927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 15:09:10 crc kubenswrapper[4793]: I0130 15:09:10.167763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9","Type":"ContainerStarted","Data":"53a2b61bee4c7b8809c505a69704f25fbea86304433e8ac7ac5e69b5e4937279"} Jan 30 15:09:11 crc kubenswrapper[4793]: I0130 15:09:11.178006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9","Type":"ContainerStarted","Data":"c96fca4660b587eb60d3db2372a00d54e6b15e06f8daa20132280faca28efaed"} Jan 30 15:09:11 crc kubenswrapper[4793]: I0130 15:09:11.195310 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.9574385859999999 podStartE2EDuration="3.195292473s" podCreationTimestamp="2026-01-30 15:09:08 +0000 UTC" firstStartedPulling="2026-01-30 15:09:09.478961989 +0000 UTC m=+5160.180310480" lastFinishedPulling="2026-01-30 15:09:10.716815876 +0000 UTC m=+5161.418164367" observedRunningTime="2026-01-30 15:09:11.194774051 +0000 UTC m=+5161.896122562" watchObservedRunningTime="2026-01-30 15:09:11.195292473 +0000 UTC m=+5161.896640984" Jan 30 15:09:18 crc kubenswrapper[4793]: I0130 15:09:18.398626 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:18 crc kubenswrapper[4793]: E0130 15:09:18.399307 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:33 crc kubenswrapper[4793]: I0130 15:09:33.398971 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:33 crc 
kubenswrapper[4793]: E0130 15:09:33.401034 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.205444 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.214154 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.216936 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jg6df"/"kube-root-ca.crt" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.217185 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jg6df"/"default-dockercfg-lqjtp" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.217259 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jg6df"/"openshift-service-ca.crt" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.353302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.353638 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.392390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.455253 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.455348 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.455922 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " 
pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.485780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.544857 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:35 crc kubenswrapper[4793]: I0130 15:09:35.062144 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:09:35 crc kubenswrapper[4793]: W0130 15:09:35.064320 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cdbb05e_d475_48b2_9b59_297532883826.slice/crio-92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe WatchSource:0}: Error finding container 92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe: Status 404 returned error can't find the container with id 92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe Jan 30 15:09:35 crc kubenswrapper[4793]: I0130 15:09:35.399163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerStarted","Data":"92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe"} Jan 30 15:09:45 crc kubenswrapper[4793]: I0130 15:09:45.398900 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:45 crc kubenswrapper[4793]: E0130 15:09:45.404317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:49 crc kubenswrapper[4793]: I0130 15:09:49.546094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerStarted","Data":"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf"} Jan 30 15:09:52 crc kubenswrapper[4793]: I0130 15:09:52.575849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerStarted","Data":"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b"} Jan 30 15:09:52 crc kubenswrapper[4793]: I0130 15:09:52.599735 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jg6df/must-gather-x5n45" podStartSLOduration=4.531392122 podStartE2EDuration="18.599718094s" podCreationTimestamp="2026-01-30 15:09:34 +0000 UTC" firstStartedPulling="2026-01-30 15:09:35.066237271 +0000 UTC m=+5185.767585762" lastFinishedPulling="2026-01-30 15:09:49.134563243 +0000 UTC m=+5199.835911734" observedRunningTime="2026-01-30 15:09:52.590185759 +0000 UTC m=+5203.291534260" 
watchObservedRunningTime="2026-01-30 15:09:52.599718094 +0000 UTC m=+5203.301066585" Jan 30 15:09:56 crc kubenswrapper[4793]: E0130 15:09:56.606858 4793 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.2:51862->38.102.83.2:36591: write tcp 38.102.83.2:51862->38.102.83.2:36591: write: broken pipe Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.658414 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/crc-debug-2g87h"] Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.659970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.734512 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.734758 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.836763 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.836867 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.836962 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.860831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.978769 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:58 crc kubenswrapper[4793]: I0130 15:09:58.639268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-2g87h" event={"ID":"e91a73e1-11d2-483f-b279-af21dd483350","Type":"ContainerStarted","Data":"4781b9ae2f920b71223792d335627b59adabaf76e90902cdd7e6c060633fa2cf"} Jan 30 15:09:59 crc kubenswrapper[4793]: I0130 15:09:59.398067 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:59 crc kubenswrapper[4793]: E0130 15:09:59.398579 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:10 crc kubenswrapper[4793]: I0130 15:10:10.410267 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:10 crc kubenswrapper[4793]: E0130 15:10:10.410912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:11 crc kubenswrapper[4793]: I0130 15:10:11.786514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-2g87h" event={"ID":"e91a73e1-11d2-483f-b279-af21dd483350","Type":"ContainerStarted","Data":"cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f"} Jan 30 15:10:11 crc kubenswrapper[4793]: I0130 15:10:11.812010 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jg6df/crc-debug-2g87h" podStartSLOduration=1.555893357 podStartE2EDuration="14.811988976s" podCreationTimestamp="2026-01-30 15:09:57 +0000 UTC" firstStartedPulling="2026-01-30 15:09:58.039084824 +0000 UTC m=+5208.740433315" lastFinishedPulling="2026-01-30 15:10:11.295180443 +0000 UTC m=+5221.996528934" observedRunningTime="2026-01-30 15:10:11.800226515 +0000 UTC m=+5222.501575006" watchObservedRunningTime="2026-01-30 15:10:11.811988976 +0000 UTC m=+5222.513337467" Jan 30 15:10:21 crc kubenswrapper[4793]: I0130 15:10:21.398311 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:21 crc kubenswrapper[4793]: E0130 15:10:21.399128 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:32 crc kubenswrapper[4793]: I0130 15:10:32.401183 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 
15:10:32 crc kubenswrapper[4793]: E0130 15:10:32.401952 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:44 crc kubenswrapper[4793]: I0130 15:10:44.398786 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:44 crc kubenswrapper[4793]: E0130 15:10:44.399752 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:58 crc kubenswrapper[4793]: I0130 15:10:58.398159 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:58 crc kubenswrapper[4793]: E0130 15:10:58.398728 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:02 crc kubenswrapper[4793]: I0130 15:11:02.990835 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:02 crc kubenswrapper[4793]: I0130 15:11:02.994945 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.010734 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.169305 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.169349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.169547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.271605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.271690 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.272314 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.273123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.274858 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.299937 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.376153 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:04 crc kubenswrapper[4793]: I0130 15:11:04.052947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:04 crc kubenswrapper[4793]: I0130 15:11:04.246270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerStarted","Data":"d2d8096fc57f1afae2693dd57e7e3fe427947ad7e4989e5dfdc716dfe95f9ff9"} Jan 30 15:11:05 crc kubenswrapper[4793]: I0130 15:11:05.256441 4793 generic.go:334] "Generic (PLEG): container finished" podID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" exitCode=0 Jan 30 15:11:05 crc kubenswrapper[4793]: I0130 15:11:05.256601 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb"} Jan 30 15:11:06 crc kubenswrapper[4793]: I0130 15:11:06.266835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerStarted","Data":"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be"} Jan 30 15:11:08 crc kubenswrapper[4793]: I0130 15:11:08.286019 4793 generic.go:334] "Generic (PLEG): container finished" podID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" exitCode=0 Jan 30 15:11:08 crc kubenswrapper[4793]: I0130 15:11:08.286117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be"} Jan 30 15:11:09 crc kubenswrapper[4793]: I0130 15:11:09.313159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerStarted","Data":"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a"} Jan 30 15:11:09 crc kubenswrapper[4793]: I0130 15:11:09.344016 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxqkt" podStartSLOduration=3.8274411219999998 podStartE2EDuration="7.34399046s" podCreationTimestamp="2026-01-30 15:11:02 +0000 UTC" firstStartedPulling="2026-01-30 15:11:05.258637429 +0000 UTC m=+5275.959985920" lastFinishedPulling="2026-01-30 15:11:08.775186767 +0000 UTC m=+5279.476535258" observedRunningTime="2026-01-30 15:11:09.336483464 +0000 UTC m=+5280.037831965" watchObservedRunningTime="2026-01-30 15:11:09.34399046 +0000 UTC m=+5280.045338941" Jan 30 15:11:11 crc kubenswrapper[4793]: I0130 15:11:11.398852 4793 scope.go:117] "RemoveContainer" 
containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:11 crc kubenswrapper[4793]: E0130 15:11:11.399373 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:13 crc kubenswrapper[4793]: I0130 15:11:13.380753 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:13 crc kubenswrapper[4793]: I0130 15:11:13.381037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:14 crc kubenswrapper[4793]: I0130 15:11:14.430446 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gxqkt" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" probeResult="failure" output=< Jan 30 15:11:14 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:11:14 crc kubenswrapper[4793]: > Jan 30 15:11:15 crc kubenswrapper[4793]: I0130 15:11:15.367208 4793 generic.go:334] "Generic (PLEG): container finished" podID="e91a73e1-11d2-483f-b279-af21dd483350" containerID="cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f" exitCode=0 Jan 30 15:11:15 crc kubenswrapper[4793]: I0130 15:11:15.367312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-2g87h" event={"ID":"e91a73e1-11d2-483f-b279-af21dd483350","Type":"ContainerDied","Data":"cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f"} Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.500157 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.540212 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-2g87h"] Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.549523 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-2g87h"] Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.610610 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"e91a73e1-11d2-483f-b279-af21dd483350\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.610697 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"e91a73e1-11d2-483f-b279-af21dd483350\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.611122 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host" (OuterVolumeSpecName: "host") pod "e91a73e1-11d2-483f-b279-af21dd483350" (UID: "e91a73e1-11d2-483f-b279-af21dd483350"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.630191 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n" (OuterVolumeSpecName: "kube-api-access-87c6n") pod "e91a73e1-11d2-483f-b279-af21dd483350" (UID: "e91a73e1-11d2-483f-b279-af21dd483350"). InnerVolumeSpecName "kube-api-access-87c6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.713567 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.713842 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:17 crc kubenswrapper[4793]: I0130 15:11:17.384195 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4781b9ae2f920b71223792d335627b59adabaf76e90902cdd7e6c060633fa2cf" Jan 30 15:11:17 crc kubenswrapper[4793]: I0130 15:11:17.384273 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.416632 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91a73e1-11d2-483f-b279-af21dd483350" path="/var/lib/kubelet/pods/e91a73e1-11d2-483f-b279-af21dd483350/volumes" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.626821 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/crc-debug-948q6"] Jan 30 15:11:18 crc kubenswrapper[4793]: E0130 15:11:18.627326 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91a73e1-11d2-483f-b279-af21dd483350" containerName="container-00" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.627349 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91a73e1-11d2-483f-b279-af21dd483350" containerName="container-00" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.627598 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91a73e1-11d2-483f-b279-af21dd483350" containerName="container-00" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.628429 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.764986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.765293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.867451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.867516 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.867786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.894803 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.947280 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:19 crc kubenswrapper[4793]: I0130 15:11:19.404237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-948q6" event={"ID":"c9819c60-4bee-4eaf-87a4-481aef7f40ba","Type":"ContainerStarted","Data":"01a90db7be859ecafd810b8a07f2c26755a5394c65fea220431985c5bdccb2d5"} Jan 30 15:11:20 crc kubenswrapper[4793]: I0130 15:11:20.412946 4793 generic.go:334] "Generic (PLEG): container finished" podID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerID="568ed0e82f10baad26d3430efb936eb0714fc3fed75c7084e20ef051683db5ff" exitCode=0 Jan 30 15:11:20 crc kubenswrapper[4793]: I0130 15:11:20.413037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-948q6" event={"ID":"c9819c60-4bee-4eaf-87a4-481aef7f40ba","Type":"ContainerDied","Data":"568ed0e82f10baad26d3430efb936eb0714fc3fed75c7084e20ef051683db5ff"} Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.524764 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.621615 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.621755 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.621875 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host" (OuterVolumeSpecName: "host") pod "c9819c60-4bee-4eaf-87a4-481aef7f40ba" (UID: "c9819c60-4bee-4eaf-87a4-481aef7f40ba"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.622253 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.641475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq" (OuterVolumeSpecName: "kube-api-access-qvjvq") pod "c9819c60-4bee-4eaf-87a4-481aef7f40ba" (UID: "c9819c60-4bee-4eaf-87a4-481aef7f40ba"). InnerVolumeSpecName "kube-api-access-qvjvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.723491 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.435453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-948q6" event={"ID":"c9819c60-4bee-4eaf-87a4-481aef7f40ba","Type":"ContainerDied","Data":"01a90db7be859ecafd810b8a07f2c26755a5394c65fea220431985c5bdccb2d5"} Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.435773 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a90db7be859ecafd810b8a07f2c26755a5394c65fea220431985c5bdccb2d5" Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.435561 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.458874 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-948q6"] Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.471797 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-948q6"] Jan 30 15:11:23 crc kubenswrapper[4793]: I0130 15:11:23.448554 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:23 crc kubenswrapper[4793]: I0130 15:11:23.520440 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:23 crc kubenswrapper[4793]: I0130 15:11:23.687247 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.000044 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/crc-debug-zxkmb"] Jan 30 15:11:24 crc kubenswrapper[4793]: E0130 15:11:24.000575 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerName="container-00" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.000599 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerName="container-00" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.000799 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerName="container-00" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.001522 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.072252 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.072872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.175084 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.175158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.175334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.205426 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.320713 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.398769 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:24 crc kubenswrapper[4793]: E0130 15:11:24.399232 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.407832 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" path="/var/lib/kubelet/pods/c9819c60-4bee-4eaf-87a4-481aef7f40ba/volumes" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.454143 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" event={"ID":"b52e7a23-6edb-43d6-9726-23c6796194b1","Type":"ContainerStarted","Data":"f9effb3adf3233c9f76a3e2b64981a874f3a34cd1f5b88e2d7a0cc3eb50c85fd"} Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.465012 4793 generic.go:334] "Generic (PLEG): container finished" podID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerID="c72a517fa26537db3ff3b91d8b7910984b9b712d451f95ae207c6331a56c555b" exitCode=0 Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.465095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" event={"ID":"b52e7a23-6edb-43d6-9726-23c6796194b1","Type":"ContainerDied","Data":"c72a517fa26537db3ff3b91d8b7910984b9b712d451f95ae207c6331a56c555b"} Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.465304 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxqkt" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" containerID="cri-o://cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" gracePeriod=2 Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.522619 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-zxkmb"] Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.533650 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-zxkmb"] Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.029381 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.110113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.110607 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.110821 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.111149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities" (OuterVolumeSpecName: "utilities") pod "efa561ef-e4d7-4893-bec0-ff16ee72f7b8" (UID: "efa561ef-e4d7-4893-bec0-ff16ee72f7b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.113175 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.126788 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv" (OuterVolumeSpecName: "kube-api-access-b9smv") pod "efa561ef-e4d7-4893-bec0-ff16ee72f7b8" (UID: "efa561ef-e4d7-4893-bec0-ff16ee72f7b8"). InnerVolumeSpecName "kube-api-access-b9smv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.148009 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa561ef-e4d7-4893-bec0-ff16ee72f7b8" (UID: "efa561ef-e4d7-4893-bec0-ff16ee72f7b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.215751 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.215790 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475410 4793 generic.go:334] "Generic (PLEG): container finished" podID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" exitCode=0 Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475471 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475481 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a"} Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475836 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"d2d8096fc57f1afae2693dd57e7e3fe427947ad7e4989e5dfdc716dfe95f9ff9"} Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475860 4793 scope.go:117] "RemoveContainer" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.548899 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.549246 4793 scope.go:117] "RemoveContainer" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.561965 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.568714 4793 scope.go:117] "RemoveContainer" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.576841 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.622010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"b52e7a23-6edb-43d6-9726-23c6796194b1\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.622138 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"b52e7a23-6edb-43d6-9726-23c6796194b1\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.622706 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host" (OuterVolumeSpecName: "host") pod "b52e7a23-6edb-43d6-9726-23c6796194b1" (UID: "b52e7a23-6edb-43d6-9726-23c6796194b1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.626731 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt" (OuterVolumeSpecName: "kube-api-access-khntt") pod "b52e7a23-6edb-43d6-9726-23c6796194b1" (UID: "b52e7a23-6edb-43d6-9726-23c6796194b1"). InnerVolumeSpecName "kube-api-access-khntt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.630940 4793 scope.go:117] "RemoveContainer" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" Jan 30 15:11:26 crc kubenswrapper[4793]: E0130 15:11:26.632219 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a\": container with ID starting with cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a not found: ID does not exist" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.632354 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a"} err="failed to get container status \"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a\": rpc error: code = NotFound desc = could not find container \"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a\": container with ID starting with cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a not found: ID does not exist" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.632462 4793 scope.go:117] "RemoveContainer" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" Jan 30 15:11:26 crc kubenswrapper[4793]: E0130 15:11:26.633642 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be\": container with ID starting with 839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be not found: ID does not exist" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.633707 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be"} err="failed to get container status \"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be\": rpc error: code = NotFound desc = could not find container \"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be\": container with ID starting with 839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be not found: ID does not exist" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.633745 4793 scope.go:117] "RemoveContainer" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" Jan 30 15:11:26 crc kubenswrapper[4793]: E0130 15:11:26.634225 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb\": container with ID starting with 5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb not found: ID does not exist" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.634345 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb"} err="failed to get container status \"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb\": rpc error: code = NotFound desc = could not 
find container \"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb\": container with ID starting with 5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb not found: ID does not exist" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.724826 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.724857 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:27 crc kubenswrapper[4793]: I0130 15:11:27.491206 4793 scope.go:117] "RemoveContainer" containerID="c72a517fa26537db3ff3b91d8b7910984b9b712d451f95ae207c6331a56c555b" Jan 30 15:11:27 crc kubenswrapper[4793]: I0130 15:11:27.491256 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:28 crc kubenswrapper[4793]: I0130 15:11:28.408655 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" path="/var/lib/kubelet/pods/b52e7a23-6edb-43d6-9726-23c6796194b1/volumes" Jan 30 15:11:28 crc kubenswrapper[4793]: I0130 15:11:28.409454 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" path="/var/lib/kubelet/pods/efa561ef-e4d7-4893-bec0-ff16ee72f7b8/volumes" Jan 30 15:11:38 crc kubenswrapper[4793]: I0130 15:11:38.397780 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:38 crc kubenswrapper[4793]: E0130 15:11:38.398772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:47 crc kubenswrapper[4793]: I0130 15:11:47.760708 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api/0.log" Jan 30 15:11:47 crc kubenswrapper[4793]: I0130 15:11:47.910204 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api-log/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.019680 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.042222 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener-log/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.249007 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.327419 4793 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker-log/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.492609 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6_2ba6b544-0042-43d7-abe9-bc40439f804b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.643723 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-notification-agent/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.655884 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-central-agent/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.778636 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/sg-core/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.791378 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/proxy-httpd/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.018620 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.042947 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api-log/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.208537 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/cinder-scheduler/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.325492 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/probe/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.369292 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc_260f1ea9-6ba5-40aa-ab56-e95237cb1009/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.398540 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:49 crc kubenswrapper[4793]: E0130 15:11:49.398845 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.576744 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.665307 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jchk2_44f4e8fd-4511-4670-944a-e37dfc6238c8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.985213 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.088727 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qgztn_f1632f4b-e0e5-4069-a77b-ae4f1911869b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.172293 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/dnsmasq-dns/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.272547 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-httpd/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.340688 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-log/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.659958 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-log/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.665781 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-httpd/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.870712 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/2.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.101983 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/1.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.217950 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp_ae4f8964-b104-43bb-8356-bb53a9635527/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.446077 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon-log/0.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.691951 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496421-n28p5_617a2857-c4b0-4558-9834-551a98cd534f/keystone-cron/0.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.751674 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lqrxr_1ee9c552-088f-4e61-961e-7062bf6e874b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:52 crc kubenswrapper[4793]: I0130 15:11:52.001079 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a3625667-be35-4d81-84f9-e00593f1c627/kube-state-metrics/0.log" Jan 30 15:11:52 crc kubenswrapper[4793]: I0130 15:11:52.297218 4793 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2_96926233-9ce4-4a0b-bab4-d0c4fa90389b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:52 crc kubenswrapper[4793]: I0130 15:11:52.441062 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d689db86f-zslsz_0ed57c3d-4992-4cfa-8655-1587b5897df6/keystone-api/0.log" Jan 30 15:11:53 crc kubenswrapper[4793]: I0130 15:11:53.229799 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-httpd/0.log" Jan 30 15:11:53 crc kubenswrapper[4793]: I0130 15:11:53.248230 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk_92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:53 crc kubenswrapper[4793]: I0130 15:11:53.532530 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-api/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.103017 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7/nova-cell0-conductor-conductor/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.472022 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d2acd609-26c0-4b98-861f-a8b12fcd07bf/nova-cell1-conductor-conductor/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.801641 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_abaabb74-42dd-40b6-9cb7-69db46f235df/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.958865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-log/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.113922 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sk8t8_dfc4d2ba-0414-4f1e-8733-a75d39218ef8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.314338 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-log/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.457519 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-api/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.682782 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.988862 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.003225 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/galera/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.112932 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_9e04e820-112a-4afa-b908-f9b8be3e9e7c/nova-scheduler-scheduler/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.352664 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.673137 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/galera/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.684450 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.944547 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7/openstackclient/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.110242 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-45fd5_230700ff-5087-4d0d-9d93-90b597d2ef72/ovn-controller/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.151594 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vx7z5_2eaf3033-e5f4-48bc-bdee-b7d97e57e765/openstack-network-exporter/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.395244 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-metadata/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.543677 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.796695 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.806555 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.850806 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovs-vswitchd/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.111851 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-45sz7_dbd66148-cdd0-4e92-9601-3ef1576a5d3f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.140622 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/openstack-network-exporter/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.244803 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/ovn-northd/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.607346 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_89e99d15-97ad-4ac5-ba68-82ef88460222/memcached/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.668183 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/openstack-network-exporter/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678341 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678738 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678754 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678776 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-utilities" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-utilities" Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678798 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerName="container-00" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678804 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerName="container-00" Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678812 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-content" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678817 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-content" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678993 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerName="container-00" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.679007 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.680184 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.704741 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.755671 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/ovsdbserver-nb/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.776297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.776548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.776851 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.879396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.879451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.879507 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.880136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.880129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"redhat-operators-hsmd9\" (UID: 
\"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.897255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.981198 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/ovsdbserver-sb/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.013263 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.052159 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/openstack-network-exporter/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.450594 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-api/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.518561 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-log/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.547537 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.673796 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.771068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerStarted","Data":"18b8805c99c2d22576ab45c0c54990056672997e71533374fa339804e56b3512"} Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.937649 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.048945 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/rabbitmq/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.123064 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.497933 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/rabbitmq/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.596066 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7_0538b501-a861-4302-b26e-f5cfb17ed62a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.756737 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.780680 4793 generic.go:334] "Generic (PLEG): container finished" podID="e3db2a3d-671e-4af9-8758-032ec6169132" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" exitCode=0 Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.780712 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf"} Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.930862 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t7bl5_b89c70f6-dabd-4984-8f21-235a9ab2f307/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.027283 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8_03127c65-edbf-41bd-9543-35ae0eddbff6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.153962 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5q58_7915ec77-ca16-4f23-a367-42b525c80284/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.398868 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:01 crc kubenswrapper[4793]: E0130 15:12:01.399179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.465255 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nlncv_3cad1dbc-effe-48d8-af45-df0a45e16783/ssh-known-hosts-edpm-deployment/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.474861 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-server/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.767598 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-auditor/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.795622 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-q459t_50011731-846f-4e86-8664-f9c797dc64ed/swift-ring-rebalance/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.822471 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-httpd/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.888424 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-reaper/0.log" Jan 30 15:12:01 crc 
kubenswrapper[4793]: I0130 15:12:01.999668 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-replicator/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.052590 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-server/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.060147 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-replicator/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.083572 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-auditor/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.171590 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-server/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.246734 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-updater/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.324882 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-auditor/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.336244 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-expirer/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.393283 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-replicator/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.460370 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-server/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.516950 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-updater/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.618865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/swift-recon-cron/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.621400 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/rsync/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.790452 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb_8b1317e1-63f1-4b06-aa31-5df5459c6ce6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.800378 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerStarted","Data":"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5"} Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.958475 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_4bf53e2d-d024-4526-ada2-0ee6b461babb/tempest-tests-tempest-tests-runner/0.log" Jan 30 15:12:03 crc kubenswrapper[4793]: I0130 15:12:03.019709 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8de9d25e-7ca7-4338-a64e-ed95f7bd9de9/test-operator-logs-container/0.log" Jan 30 15:12:03 crc kubenswrapper[4793]: I0130 15:12:03.142442 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt_dcc6f491-d722-48e4-bcb8-8a9de7603786/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:12 crc kubenswrapper[4793]: I0130 15:12:12.884953 4793 generic.go:334] "Generic (PLEG): container finished" podID="e3db2a3d-671e-4af9-8758-032ec6169132" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" exitCode=0 Jan 30 15:12:12 crc kubenswrapper[4793]: I0130 15:12:12.885009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5"} Jan 30 15:12:13 crc kubenswrapper[4793]: I0130 15:12:13.398837 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:13 crc kubenswrapper[4793]: E0130 15:12:13.399228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:13 crc kubenswrapper[4793]: I0130 15:12:13.897568 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerStarted","Data":"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6"} Jan 30 15:12:13 crc kubenswrapper[4793]: I0130 15:12:13.915318 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hsmd9" podStartSLOduration=3.132562285 podStartE2EDuration="15.915300143s" podCreationTimestamp="2026-01-30 15:11:58 +0000 UTC" firstStartedPulling="2026-01-30 15:12:00.782023758 +0000 UTC m=+5331.483372249" lastFinishedPulling="2026-01-30 15:12:13.564761626 +0000 UTC m=+5344.266110107" observedRunningTime="2026-01-30 15:12:13.912205266 +0000 UTC m=+5344.613553767" watchObservedRunningTime="2026-01-30 15:12:13.915300143 +0000 UTC m=+5344.616648634" Jan 30 15:12:19 crc kubenswrapper[4793]: I0130 15:12:19.013440 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:19 crc kubenswrapper[4793]: I0130 15:12:19.014879 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:20 crc kubenswrapper[4793]: I0130 15:12:20.063565 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hsmd9" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" probeResult="failure" output=< Jan 30 15:12:20 
crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:12:20 crc kubenswrapper[4793]: > Jan 30 15:12:26 crc kubenswrapper[4793]: I0130 15:12:26.398193 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:26 crc kubenswrapper[4793]: E0130 15:12:26.399037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:29 crc kubenswrapper[4793]: I0130 15:12:29.066022 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:29 crc kubenswrapper[4793]: I0130 15:12:29.120909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.288887 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.426827 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-8bg6c_ec981da4-a3ba-4e4e-a0eb-2168ab79fe77/manager/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.587619 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.785872 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.822285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.842995 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.046627 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hsmd9" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" containerID="cri-o://4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" gracePeriod=2 Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.066024 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.125662 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/extract/0.log" Jan 30 15:12:31 crc 
kubenswrapper[4793]: I0130 15:12:31.208384 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.426441 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-9kwwr_8835e5d9-c37d-4744-95cb-c56c10a58647/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.498515 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-hjpkr_6f991e04-2db3-4b32-bc83-8bbce4ce7a08/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.534620 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.601987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"e3db2a3d-671e-4af9-8758-032ec6169132\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.602108 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"e3db2a3d-671e-4af9-8758-032ec6169132\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.602163 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"e3db2a3d-671e-4af9-8758-032ec6169132\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.604527 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities" (OuterVolumeSpecName: "utilities") pod "e3db2a3d-671e-4af9-8758-032ec6169132" (UID: "e3db2a3d-671e-4af9-8758-032ec6169132"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.610171 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr" (OuterVolumeSpecName: "kube-api-access-fp5gr") pod "e3db2a3d-671e-4af9-8758-032ec6169132" (UID: "e3db2a3d-671e-4af9-8758-032ec6169132"). InnerVolumeSpecName "kube-api-access-fp5gr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.704292 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.704324 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708105 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:31 crc kubenswrapper[4793]: E0130 15:12:31.708479 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-content" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708497 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-content" Jan 30 15:12:31 crc kubenswrapper[4793]: E0130 15:12:31.708525 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-utilities" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708532 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-utilities" Jan 30 15:12:31 crc kubenswrapper[4793]: E0130 15:12:31.708555 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708561 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708741 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.727739 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.730842 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.780368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3db2a3d-671e-4af9-8758-032ec6169132" (UID: "e3db2a3d-671e-4af9-8758-032ec6169132"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.806098 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.860397 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-g5848_1d859404-a29c-46c9-b66a-fed5ff0b13f0/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.893957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-k4tz9_8d24cd33-2902-424a-8ffc-76b1e4c2f482/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.907695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.907770 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.907860 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010087 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"certified-operators-65wdd\" (UID: 
\"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.025456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.057472 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058327 4793 generic.go:334] "Generic (PLEG): container finished" podID="e3db2a3d-671e-4af9-8758-032ec6169132" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" exitCode=0 Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6"} Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058508 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"18b8805c99c2d22576ab45c0c54990056672997e71533374fa339804e56b3512"} Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058576 4793 scope.go:117] "RemoveContainer" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058740 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.112127 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.126289 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.136137 4793 scope.go:117] "RemoveContainer" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.174919 4793 scope.go:117] "RemoveContainer" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.231532 4793 scope.go:117] "RemoveContainer" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" Jan 30 15:12:32 crc kubenswrapper[4793]: E0130 15:12:32.232635 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6\": container with ID starting with 4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6 not found: ID does not exist" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.232697 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6"} err="failed to get container status \"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6\": rpc error: code = NotFound desc = could not find container \"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6\": container with ID starting with 4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6 not found: ID does not exist" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.232740 4793 scope.go:117] "RemoveContainer" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" Jan 30 15:12:32 crc kubenswrapper[4793]: E0130 15:12:32.233313 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5\": container with ID starting with a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5 not found: ID does not exist" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.233342 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5"} err="failed to get container status \"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5\": rpc error: code = NotFound desc = could not find container \"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5\": container with ID starting with a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5 not found: ID does not exist" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.233362 4793 scope.go:117] "RemoveContainer" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" Jan 30 15:12:32 crc kubenswrapper[4793]: E0130 15:12:32.234703 4793 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf\": container with ID starting with 06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf not found: ID does not exist" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.234849 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf"} err="failed to get container status \"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf\": rpc error: code = NotFound desc = could not find container \"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf\": container with ID starting with 06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf not found: ID does not exist" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.452207 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" path="/var/lib/kubelet/pods/e3db2a3d-671e-4af9-8758-032ec6169132/volumes" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.600963 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-m4q78_710c57e4-a09e-4db1-a03b-13db05085d41/manager/0.log" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.670844 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-khfs7_97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642/manager/0.log" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.678002 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.897852 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v77jx_7c34e714-0f18-4e41-ab9c-1dfe4859e644/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.066253 4793 generic.go:334] "Generic (PLEG): container finished" podID="60dfbdf5-5a19-4864-b113-60e96a555304" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" exitCode=0 Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.066313 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef"} Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.066337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerStarted","Data":"329063fb66b0af99c37d443f70678ace1de380ba2fc9bb63f01f69a193285a8a"} Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.068033 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.109342 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-82cvq_bdcd04f7-09fa-4b1b-8b99-3de61a28a337/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.156695 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-9ftxd_ce9be14f-8255-421e-91b4-a30fc5482ff4/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.362380 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n29l5_fa88d14c-0581-439c-9da1-f1123e41a65a/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.445333 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-x6pk6_05415bc7-22dc-4b15-a047-6ed62755638d/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.724828 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-vtx9d_31ca6ac1-d2da-4325-baa4-e18fc3514721/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.759489 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5nsr4_53576ec8-2f6d-4781-8906-726529cc6049/manager/0.log" Jan 30 15:12:34 crc kubenswrapper[4793]: I0130 15:12:34.195404 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs_e446e97c-6e9f-4dc2-b5fd-fb63451fd326/manager/0.log" Jan 30 15:12:34 crc kubenswrapper[4793]: I0130 15:12:34.333645 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-977cfdb67-sp4rd_2cec3782-823b-4ddf-909a-e773203cd721/operator/0.log" Jan 30 15:12:34 crc kubenswrapper[4793]: I0130 15:12:34.781338 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x56zx_e3b6e703-4540-4739-87cd-8699d4e04903/registry-server/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.059326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-27flx_02b8e60c-3514-4d72-bde6-5af374a926b1/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.084550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerStarted","Data":"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68"} Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.211172 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4ml88_6231ed92-57a8-4c48-9c75-e916940b22ea/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.351782 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nb4g2_2aae677d-830b-44b8-a792-3d0b527aee89/operator/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.488989 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75c5857d49-pm446_e9854850-e645-4364-a471-bef994f8536c/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.546202 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-vxhpt_3eb94c51-d506-4273-898b-dba537cabea6/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.753205 4793 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-tv5vr_6b21b0ca-d506-4b1b-b6e1-06e2a96ae033/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.839871 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-qb5xp_5e215cef-de14-424d-9028-a48bad979192/manager/0.log" Jan 30 15:12:36 crc kubenswrapper[4793]: I0130 15:12:36.000830 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-btjpp_f65e9448-ee4e-4f22-9bd7-ecf650cb36b5/manager/0.log" Jan 30 15:12:36 crc kubenswrapper[4793]: I0130 15:12:36.093200 4793 generic.go:334] "Generic (PLEG): container finished" podID="60dfbdf5-5a19-4864-b113-60e96a555304" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" exitCode=0 Jan 30 15:12:36 crc kubenswrapper[4793]: I0130 15:12:36.093238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68"} Jan 30 15:12:37 crc kubenswrapper[4793]: I0130 15:12:37.104642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerStarted","Data":"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1"} Jan 30 15:12:37 crc kubenswrapper[4793]: I0130 15:12:37.398284 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:37 crc kubenswrapper[4793]: E0130 15:12:37.398554 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.058944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.060353 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.112126 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.138657 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65wdd" podStartSLOduration=7.663938034 podStartE2EDuration="11.13863816s" podCreationTimestamp="2026-01-30 15:12:31 +0000 UTC" firstStartedPulling="2026-01-30 15:12:33.067833762 +0000 UTC m=+5363.769182253" lastFinishedPulling="2026-01-30 15:12:36.542533888 +0000 UTC m=+5367.243882379" observedRunningTime="2026-01-30 15:12:37.140723305 +0000 UTC m=+5367.842071806" watchObservedRunningTime="2026-01-30 15:12:42.13863816 +0000 UTC m=+5372.839986651" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.183541 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.348677 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.154269 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65wdd" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" containerID="cri-o://fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" gracePeriod=2 Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.661967 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.795335 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"60dfbdf5-5a19-4864-b113-60e96a555304\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.795462 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"60dfbdf5-5a19-4864-b113-60e96a555304\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.795576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"60dfbdf5-5a19-4864-b113-60e96a555304\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.796399 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities" (OuterVolumeSpecName: "utilities") pod "60dfbdf5-5a19-4864-b113-60e96a555304" (UID: "60dfbdf5-5a19-4864-b113-60e96a555304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.818234 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl" (OuterVolumeSpecName: "kube-api-access-lbgnl") pod "60dfbdf5-5a19-4864-b113-60e96a555304" (UID: "60dfbdf5-5a19-4864-b113-60e96a555304"). InnerVolumeSpecName "kube-api-access-lbgnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.856440 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60dfbdf5-5a19-4864-b113-60e96a555304" (UID: "60dfbdf5-5a19-4864-b113-60e96a555304"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.899230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.899259 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.899270 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164775 4793 generic.go:334] "Generic (PLEG): container finished" podID="60dfbdf5-5a19-4864-b113-60e96a555304" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" exitCode=0 Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164817 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1"} Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"329063fb66b0af99c37d443f70678ace1de380ba2fc9bb63f01f69a193285a8a"} Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164861 4793 scope.go:117] "RemoveContainer" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164987 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.186175 4793 scope.go:117] "RemoveContainer" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.208905 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.218968 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.226007 4793 scope.go:117] "RemoveContainer" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.260891 4793 scope.go:117] "RemoveContainer" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" Jan 30 15:12:45 crc kubenswrapper[4793]: E0130 15:12:45.263663 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1\": container with ID starting with fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1 not found: ID does not exist" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.263718 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1"} err="failed to get container status \"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1\": rpc error: code = NotFound desc = could not find container \"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1\": container with ID starting with fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1 not found: ID does not exist" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.263746 4793 scope.go:117] "RemoveContainer" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" Jan 30 15:12:45 crc kubenswrapper[4793]: E0130 15:12:45.264222 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68\": container with ID starting with e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68 not found: ID does not exist" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.264256 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68"} err="failed to get container status \"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68\": rpc error: code = NotFound desc = could not find container \"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68\": container with ID starting with e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68 not found: ID does not exist" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.264279 4793 scope.go:117] "RemoveContainer" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" Jan 30 15:12:45 crc kubenswrapper[4793]: E0130 15:12:45.264683 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef\": container with ID starting with 5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef not found: ID does not exist" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.264731 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef"} err="failed to get container status \"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef\": rpc error: code = NotFound desc = could not find container \"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef\": container with ID starting with 5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef not found: ID does not exist" Jan 30 15:12:46 crc kubenswrapper[4793]: I0130 15:12:46.408897 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" path="/var/lib/kubelet/pods/60dfbdf5-5a19-4864-b113-60e96a555304/volumes" Jan 30 15:12:48 crc kubenswrapper[4793]: I0130 15:12:48.400515 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:48 crc kubenswrapper[4793]: E0130 15:12:48.402385 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.401183 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/1.log" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.442357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.723076 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/kube-rbac-proxy/0.log" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.728970 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/machine-api-operator/0.log" Jan 30 15:13:00 crc kubenswrapper[4793]: I0130 15:13:00.429600 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:13:00 crc kubenswrapper[4793]: E0130 15:13:00.430597 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:13:10 crc kubenswrapper[4793]: I0130 15:13:10.188807 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-26t5l_1b680507-f432-4019-b372-d9452d89aa97/cert-manager-controller/0.log" Jan 30 15:13:10 crc kubenswrapper[4793]: I0130 15:13:10.484860 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tzjhq_8fd78cec-1c0f-427e-8224-4021da0ede3c/cert-manager-cainjector/0.log" Jan 30 15:13:10 crc kubenswrapper[4793]: I0130 15:13:10.630194 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lm7l8_e88efb4a-1489-4847-adb4-230a8b5db6ef/cert-manager-webhook/0.log" Jan 30 15:13:15 crc kubenswrapper[4793]: I0130 15:13:15.399531 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:13:16 crc kubenswrapper[4793]: I0130 15:13:16.463843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5"} Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.283156 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kc5ft_5df01042-63fe-458a-b71d-d1f9bdf9ea66/nmstate-console-plugin/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.488992 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/kube-rbac-proxy/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.549142 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dh9db_e635e428-77d8-44fb-baa4-1af4bd603c10/nmstate-handler/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.631004 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/nmstate-metrics/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.707177 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9bsps_1f691ecb-c128-4332-a7ab-c4e173490f50/nmstate-operator/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.843297 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hw489_68bcadc4-02c3-44c0-a252-0606ff1f0a09/nmstate-webhook/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.519307 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/kube-rbac-proxy/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.628145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/controller/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.764619 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.993149 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.995317 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.024594 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.027430 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.240513 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.252136 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.257356 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.304552 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.478101 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.505649 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.521742 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.547534 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/controller/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.793426 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.833175 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.839974 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy-frr/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.119460 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/reloader/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.158263 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4p6gx_e5a76649-d081-4224-baca-095ca1ffadfd/frr-k8s-webhook-server/0.log" Jan 30 
15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.453694 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fbd4d697c-ndglw_75266e51-59ee-432d-b56a-ba972e5ff25b/manager/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.651458 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6446fc49bd-rzbbm_45949f1b-1075-4d7f-9007-8525e0364a55/webhook-server/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.832798 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/kube-rbac-proxy/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.898294 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr/0.log" Jan 30 15:13:57 crc kubenswrapper[4793]: I0130 15:13:57.318313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/speaker/0.log" Jan 30 15:14:00 crc kubenswrapper[4793]: I0130 15:14:00.538961 4793 scope.go:117] "RemoveContainer" containerID="2e68fc094c6474084a00ace7a1343c3281487ac0b42f6c0f86c4ce491d8395ce" Jan 30 15:14:00 crc kubenswrapper[4793]: I0130 15:14:00.562244 4793 scope.go:117] "RemoveContainer" containerID="d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33" Jan 30 15:14:00 crc kubenswrapper[4793]: I0130 15:14:00.608876 4793 scope.go:117] "RemoveContainer" containerID="4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b" Jan 30 15:14:11 crc kubenswrapper[4793]: I0130 15:14:11.929450 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.156228 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.214499 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.229020 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.422558 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/extract/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.459684 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.460505 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.651857 
4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.846653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.886357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.910653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.187670 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.188551 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/extract/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.236785 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.459521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.647216 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.690527 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.690547 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.841765 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.871018 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.174251 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.501694 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.518003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.555937 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.691263 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/registry-server/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.049858 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.053207 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.381629 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zkjbp_5834bf4b-676f-4ece-bcee-28949a7109ca/marketplace-operator/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.527223 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.639764 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.750810 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.825934 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/registry-server/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.850897 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.058632 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.079761 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.288957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.337159 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/registry-server/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.518029 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.565565 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.574630 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.748908 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.775407 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:14:17 crc kubenswrapper[4793]: I0130 15:14:17.381003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/registry-server/0.log" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.150181 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589"] Jan 30 15:15:00 crc kubenswrapper[4793]: E0130 15:15:00.151219 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-utilities" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151235 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-utilities" Jan 30 15:15:00 crc kubenswrapper[4793]: E0130 15:15:00.151263 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151274 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" Jan 30 15:15:00 crc kubenswrapper[4793]: E0130 15:15:00.151290 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-content" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151298 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-content" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151492 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.152356 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.155432 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.155966 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.217346 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589"] Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.309178 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.309430 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.309499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.411793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.411854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.411938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.412747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod 
\"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.417482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.447339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.475171 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.937021 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589"] Jan 30 15:15:01 crc kubenswrapper[4793]: I0130 15:15:01.753675 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerStarted","Data":"55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a"} Jan 30 15:15:01 crc kubenswrapper[4793]: I0130 15:15:01.754016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerStarted","Data":"0687be94c24be55440f411cf6b03ef0f1c8455e89eab84818c383651a859ab98"} Jan 30 15:15:01 crc kubenswrapper[4793]: I0130 15:15:01.772539 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" podStartSLOduration=1.772519518 podStartE2EDuration="1.772519518s" podCreationTimestamp="2026-01-30 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:15:01.767639009 +0000 UTC m=+5512.468987500" watchObservedRunningTime="2026-01-30 15:15:01.772519518 +0000 UTC m=+5512.473867999" Jan 30 15:15:02 crc kubenswrapper[4793]: E0130 15:15:02.922682 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b368057_7309_4308_9956_1850a9297956.slice/crio-55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a.scope\": RecentStats: unable to find data in memory cache]" Jan 30 15:15:03 crc kubenswrapper[4793]: I0130 15:15:03.774411 4793 generic.go:334] "Generic (PLEG): container finished" podID="9b368057-7309-4308-9956-1850a9297956" containerID="55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a" exitCode=0 Jan 30 15:15:03 crc kubenswrapper[4793]: I0130 15:15:03.774453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerDied","Data":"55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a"} Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.193868 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.315646 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"9b368057-7309-4308-9956-1850a9297956\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.316116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"9b368057-7309-4308-9956-1850a9297956\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.316266 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod \"9b368057-7309-4308-9956-1850a9297956\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.317099 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b368057-7309-4308-9956-1850a9297956" (UID: "9b368057-7309-4308-9956-1850a9297956"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.321163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv" (OuterVolumeSpecName: "kube-api-access-77qcv") pod "9b368057-7309-4308-9956-1850a9297956" (UID: "9b368057-7309-4308-9956-1850a9297956"). InnerVolumeSpecName "kube-api-access-77qcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.321765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b368057-7309-4308-9956-1850a9297956" (UID: "9b368057-7309-4308-9956-1850a9297956"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.418884 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.418933 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.418948 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") on node \"crc\" DevicePath \"\"" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.792851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerDied","Data":"0687be94c24be55440f411cf6b03ef0f1c8455e89eab84818c383651a859ab98"} Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.792890 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0687be94c24be55440f411cf6b03ef0f1c8455e89eab84818c383651a859ab98" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.792946 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.867155 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.874796 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 15:15:06 crc kubenswrapper[4793]: I0130 15:15:06.409446 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" path="/var/lib/kubelet/pods/afd3a15c-5ed4-45be-8091-84573a97a63a/volumes" Jan 30 15:15:42 crc kubenswrapper[4793]: I0130 15:15:42.413976 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:15:42 crc kubenswrapper[4793]: I0130 15:15:42.414593 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:16:00 crc kubenswrapper[4793]: I0130 15:16:00.687460 4793 scope.go:117] "RemoveContainer" containerID="1def2597602a7873d34fb216db52e7e4d4963d5b5a3ca0e36a14a7576a9a797f" Jan 30 15:16:12 crc kubenswrapper[4793]: I0130 15:16:12.413459 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 15:16:12 crc kubenswrapper[4793]: I0130 15:16:12.413861 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.414187 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.415356 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.415441 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.416668 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.416818 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5" gracePeriod=600 Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.695958 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5" exitCode=0 Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.695997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5"} Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.696535 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.707833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"} Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.714674 4793 generic.go:334] "Generic (PLEG): container finished" podID="9cdbb05e-d475-48b2-9b59-297532883826" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" exitCode=0 
Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.714718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerDied","Data":"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf"} Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.715503 4793 scope.go:117] "RemoveContainer" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:44 crc kubenswrapper[4793]: I0130 15:16:44.078300 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg6df_must-gather-x5n45_9cdbb05e-d475-48b2-9b59-297532883826/gather/0.log" Jan 30 15:16:52 crc kubenswrapper[4793]: I0130 15:16:52.745016 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:16:52 crc kubenswrapper[4793]: I0130 15:16:52.745751 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jg6df/must-gather-x5n45" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" containerID="cri-o://4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" gracePeriod=2 Jan 30 15:16:52 crc kubenswrapper[4793]: I0130 15:16:52.753488 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.165504 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg6df_must-gather-x5n45_9cdbb05e-d475-48b2-9b59-297532883826/copy/0.log" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.166213 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.309717 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"9cdbb05e-d475-48b2-9b59-297532883826\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.309935 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"9cdbb05e-d475-48b2-9b59-297532883826\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.323403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6" (OuterVolumeSpecName: "kube-api-access-nvqx6") pod "9cdbb05e-d475-48b2-9b59-297532883826" (UID: "9cdbb05e-d475-48b2-9b59-297532883826"). InnerVolumeSpecName "kube-api-access-nvqx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.412230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") on node \"crc\" DevicePath \"\"" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.557717 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9cdbb05e-d475-48b2-9b59-297532883826" (UID: "9cdbb05e-d475-48b2-9b59-297532883826"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.616785 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.815995 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg6df_must-gather-x5n45_9cdbb05e-d475-48b2-9b59-297532883826/copy/0.log" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.816553 4793 generic.go:334] "Generic (PLEG): container finished" podID="9cdbb05e-d475-48b2-9b59-297532883826" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" exitCode=143 Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.816637 4793 scope.go:117] "RemoveContainer" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.816641 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.836142 4793 scope.go:117] "RemoveContainer" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.881655 4793 scope.go:117] "RemoveContainer" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" Jan 30 15:16:53 crc kubenswrapper[4793]: E0130 15:16:53.882129 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b\": container with ID starting with 4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b not found: ID does not exist" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.882171 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b"} err="failed to get container status \"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b\": rpc error: code = NotFound desc = could not find container \"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b\": container with ID starting with 4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b not found: ID does not exist" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.882190 4793 scope.go:117] "RemoveContainer" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:53 crc kubenswrapper[4793]: E0130 15:16:53.882434 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf\": container with ID starting with ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf not found: ID does not exist" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.882467 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf"} err="failed to get container status \"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf\": rpc error: code = NotFound desc = could not find container \"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf\": container with ID starting with ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf not found: ID does not exist" Jan 30 15:16:54 crc kubenswrapper[4793]: I0130 15:16:54.408604 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdbb05e-d475-48b2-9b59-297532883826" path="/var/lib/kubelet/pods/9cdbb05e-d475-48b2-9b59-297532883826/volumes" Jan 30 15:17:00 crc kubenswrapper[4793]: I0130 15:17:00.750369 4793 scope.go:117] "RemoveContainer" containerID="cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.208233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:17:58 crc kubenswrapper[4793]: E0130 15:17:58.209214 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b368057-7309-4308-9956-1850a9297956" containerName="collect-profiles" Jan 30 15:17:58 crc 
kubenswrapper[4793]: I0130 15:17:58.209231 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b368057-7309-4308-9956-1850a9297956" containerName="collect-profiles" Jan 30 15:17:58 crc kubenswrapper[4793]: E0130 15:17:58.209251 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="gather" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.209259 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="gather" Jan 30 15:17:58 crc kubenswrapper[4793]: E0130 15:17:58.209274 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.209284 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.212107 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.212142 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="gather" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.212174 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b368057-7309-4308-9956-1850a9297956" containerName="collect-profiles" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.213802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.228950 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.384120 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.384514 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.384577 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.485903 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 
15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.486094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.486113 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.488549 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.488661 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.509870 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.599392 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:59 crc kubenswrapper[4793]: I0130 15:17:59.199754 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:17:59 crc kubenswrapper[4793]: I0130 15:17:59.391692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerStarted","Data":"7572d1d4da12bd986bc215ee7e50ae0a56a257908a7d2e2006c6a004836380bd"} Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.401975 4793 generic.go:334] "Generic (PLEG): container finished" podID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" exitCode=0 Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.410905 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58"} Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.411556 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.842550 4793 scope.go:117] "RemoveContainer" containerID="568ed0e82f10baad26d3430efb936eb0714fc3fed75c7084e20ef051683db5ff" Jan 30 15:18:02 crc kubenswrapper[4793]: I0130 15:18:02.422185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerStarted","Data":"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35"} Jan 30 15:18:03 crc kubenswrapper[4793]: I0130 15:18:03.435178 4793 generic.go:334] "Generic (PLEG): container finished" podID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" exitCode=0 Jan 30 15:18:03 crc kubenswrapper[4793]: I0130 15:18:03.435235 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35"} Jan 30 15:18:04 crc kubenswrapper[4793]: I0130 15:18:04.447093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerStarted","Data":"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba"} Jan 30 15:18:04 crc kubenswrapper[4793]: I0130 15:18:04.469779 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpz58" podStartSLOduration=2.667915434 podStartE2EDuration="6.46975986s" podCreationTimestamp="2026-01-30 15:17:58 +0000 UTC" firstStartedPulling="2026-01-30 15:18:00.411348124 +0000 UTC m=+5691.112696615" lastFinishedPulling="2026-01-30 15:18:04.21319255 +0000 UTC m=+5694.914541041" observedRunningTime="2026-01-30 15:18:04.464735657 +0000 UTC m=+5695.166084148" watchObservedRunningTime="2026-01-30 15:18:04.46975986 +0000 UTC m=+5695.171108351" Jan 30 15:18:08 crc kubenswrapper[4793]: I0130 15:18:08.600193 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:08 
crc kubenswrapper[4793]: I0130 15:18:08.600861 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:08 crc kubenswrapper[4793]: I0130 15:18:08.645749 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:09 crc kubenswrapper[4793]: I0130 15:18:09.552887 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:09 crc kubenswrapper[4793]: I0130 15:18:09.625870 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:18:11 crc kubenswrapper[4793]: I0130 15:18:11.515672 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rpz58" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" containerID="cri-o://a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" gracePeriod=2 Jan 30 15:18:11 crc kubenswrapper[4793]: I0130 15:18:11.932826 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.056553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.056819 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.056916 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.057797 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities" (OuterVolumeSpecName: "utilities") pod "851b6232-0ffd-4c7d-a8ee-fa085e0790f0" (UID: "851b6232-0ffd-4c7d-a8ee-fa085e0790f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.063401 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8" (OuterVolumeSpecName: "kube-api-access-dwgc8") pod "851b6232-0ffd-4c7d-a8ee-fa085e0790f0" (UID: "851b6232-0ffd-4c7d-a8ee-fa085e0790f0"). InnerVolumeSpecName "kube-api-access-dwgc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.119321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "851b6232-0ffd-4c7d-a8ee-fa085e0790f0" (UID: "851b6232-0ffd-4c7d-a8ee-fa085e0790f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.159331 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.159365 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") on node \"crc\" DevicePath \"\"" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.159378 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529101 4793 generic.go:334] "Generic (PLEG): container finished" podID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" exitCode=0 Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529141 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba"} Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529166 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"7572d1d4da12bd986bc215ee7e50ae0a56a257908a7d2e2006c6a004836380bd"} Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529181 4793 scope.go:117] "RemoveContainer" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529298 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.550713 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.562232 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.571660 4793 scope.go:117] "RemoveContainer" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.595032 4793 scope.go:117] "RemoveContainer" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.642344 4793 scope.go:117] "RemoveContainer" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" Jan 30 15:18:12 crc kubenswrapper[4793]: E0130 15:18:12.642784 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba\": container with ID starting with a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba not found: ID does not exist" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.642835 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba"} err="failed to get container status \"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba\": rpc error: code = NotFound desc = could not find container \"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba\": container with ID starting with a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba not found: ID does not exist" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.642862 4793 scope.go:117] "RemoveContainer" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" Jan 30 15:18:12 crc kubenswrapper[4793]: E0130 15:18:12.643551 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35\": container with ID starting with 6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35 not found: ID does not exist" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.643599 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35"} err="failed to get container status \"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35\": rpc error: code = NotFound desc = could not find container \"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35\": container with ID starting with 6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35 not found: ID does not exist" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.643644 4793 scope.go:117] "RemoveContainer" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" Jan 30 15:18:12 crc kubenswrapper[4793]: E0130 15:18:12.644180 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58\": container with ID starting with c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58 not found: ID does not exist" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.644214 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58"} err="failed to get container status \"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58\": rpc error: code = NotFound desc = could not find container \"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58\": container with ID starting with c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58 not found: ID does not exist" Jan 30 15:18:14 crc kubenswrapper[4793]: I0130 15:18:14.411595 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" path="/var/lib/kubelet/pods/851b6232-0ffd-4c7d-a8ee-fa085e0790f0/volumes" Jan 30 15:18:42 crc kubenswrapper[4793]: I0130 15:18:42.413869 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:18:42 crc kubenswrapper[4793]: I0130 15:18:42.415321 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:19:12 crc kubenswrapper[4793]: I0130 15:19:12.413424 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:19:12 crc kubenswrapper[4793]: I0130 15:19:12.413928 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.413947 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.414768 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.414830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.415943 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.416037 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" gracePeriod=600 Jan 30 15:19:42 crc kubenswrapper[4793]: E0130 15:19:42.536455 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344027 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" exitCode=0 Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"} Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344127 4793 scope.go:117] "RemoveContainer" containerID="2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5" Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344945 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:19:43 crc kubenswrapper[4793]: E0130 15:19:43.345421 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.328978 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:19:54 crc kubenswrapper[4793]: E0130 15:19:54.331110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331244 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" Jan 30 15:19:54 crc kubenswrapper[4793]: E0130 15:19:54.331375 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-utilities" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331456 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-utilities" Jan 30 15:19:54 crc kubenswrapper[4793]: E0130 15:19:54.331552 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-content" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331632 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-content" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331963 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.333386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.338478 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5swb7"/"openshift-service-ca.crt" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.338749 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5swb7"/"kube-root-ca.crt" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.353803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.353918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.455494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.462410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.462775 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.467338 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.488737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.664189 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:55 crc kubenswrapper[4793]: I0130 15:19:55.202624 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:19:55 crc kubenswrapper[4793]: I0130 15:19:55.455369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerStarted","Data":"bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5"} Jan 30 15:19:55 crc kubenswrapper[4793]: I0130 15:19:55.455600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerStarted","Data":"05ef02364cb3c6cb1aac7a4fce6e06fe6eef6f77fca3776b5d2229196af4cde1"} Jan 30 15:19:56 crc kubenswrapper[4793]: I0130 15:19:56.398682 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:19:56 crc kubenswrapper[4793]: E0130 15:19:56.399387 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:19:56 crc kubenswrapper[4793]: I0130 15:19:56.468300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerStarted","Data":"4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144"} Jan 30 15:19:56 crc kubenswrapper[4793]: I0130 15:19:56.495749 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5swb7/must-gather-9zdpz" podStartSLOduration=2.495726538 podStartE2EDuration="2.495726538s" podCreationTimestamp="2026-01-30 15:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:19:56.492004386 +0000 UTC m=+5807.193352877" watchObservedRunningTime="2026-01-30 15:19:56.495726538 +0000 UTC m=+5807.197075039" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.783267 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/crc-debug-czd7z"] Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.785791 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.788551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5swb7"/"default-dockercfg-nk8fm" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.974699 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.975024 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.076582 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.076678 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.076804 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.122277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.408809 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: W0130 15:20:00.453405 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ea6d10_0cd4_4c62_aa35_f91c715e4ba4.slice/crio-13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11 WatchSource:0}: Error finding container 13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11: Status 404 returned error can't find the container with id 13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11 Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.516578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-czd7z" event={"ID":"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4","Type":"ContainerStarted","Data":"13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11"} Jan 30 15:20:01 crc kubenswrapper[4793]: I0130 15:20:01.526357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-czd7z" event={"ID":"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4","Type":"ContainerStarted","Data":"86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b"} Jan 30 15:20:01 crc kubenswrapper[4793]: I0130 15:20:01.543731 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5swb7/crc-debug-czd7z" podStartSLOduration=2.543710675 podStartE2EDuration="2.543710675s" podCreationTimestamp="2026-01-30 15:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:20:01.542772882 +0000 UTC m=+5812.244121383" watchObservedRunningTime="2026-01-30 15:20:01.543710675 +0000 UTC m=+5812.245059166" Jan 30 15:20:08 crc kubenswrapper[4793]: I0130 15:20:08.399363 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:08 crc kubenswrapper[4793]: E0130 15:20:08.400349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:23 crc kubenswrapper[4793]: I0130 15:20:23.398982 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:23 crc kubenswrapper[4793]: E0130 15:20:23.401818 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:34 crc kubenswrapper[4793]: I0130 15:20:34.401578 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:34 crc kubenswrapper[4793]: E0130 15:20:34.402689 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:45 crc kubenswrapper[4793]: I0130 15:20:45.398028 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:45 crc kubenswrapper[4793]: E0130 15:20:45.398793 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:47 crc kubenswrapper[4793]: I0130 15:20:47.931745 4793 generic.go:334] "Generic (PLEG): container finished" podID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerID="86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b" exitCode=0 Jan 30 15:20:47 crc kubenswrapper[4793]: I0130 15:20:47.931839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-czd7z" event={"ID":"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4","Type":"ContainerDied","Data":"86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b"} Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.040915 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.079479 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-czd7z"] Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.087284 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-czd7z"] Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.214892 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.215060 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host" (OuterVolumeSpecName: "host") pod "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" (UID: "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.215149 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.215596 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.221294 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq" (OuterVolumeSpecName: "kube-api-access-8btrq") pod "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" (UID: "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4"). InnerVolumeSpecName "kube-api-access-8btrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.317571 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.947985 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.948084 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.366274 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/crc-debug-tl96s"] Jan 30 15:20:50 crc kubenswrapper[4793]: E0130 15:20:50.366682 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerName="container-00" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.366694 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerName="container-00" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.366899 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerName="container-00" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.367478 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.371259 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5swb7"/"default-dockercfg-nk8fm" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.410240 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" path="/var/lib/kubelet/pods/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4/volumes" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.539557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.539600 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.641805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.641874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.641949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.673013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.683925 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.956767 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-tl96s" event={"ID":"2407e444-c4b6-488a-b397-10febd8cdf44","Type":"ContainerStarted","Data":"af38a403c914f66e3391e16b5d16fd2af804d00b0066101c8b2d179624a3dc49"} Jan 30 15:20:51 crc kubenswrapper[4793]: E0130 15:20:51.342482 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407e444_c4b6_488a_b397_10febd8cdf44.slice/crio-85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f.scope\": RecentStats: unable to find data in memory cache]" Jan 30 15:20:51 crc kubenswrapper[4793]: I0130 15:20:51.967575 4793 generic.go:334] "Generic (PLEG): container finished" podID="2407e444-c4b6-488a-b397-10febd8cdf44" containerID="85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f" exitCode=0 Jan 30 15:20:51 crc kubenswrapper[4793]: I0130 15:20:51.967636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-tl96s" event={"ID":"2407e444-c4b6-488a-b397-10febd8cdf44","Type":"ContainerDied","Data":"85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f"} Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.076642 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194207 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"2407e444-c4b6-488a-b397-10febd8cdf44\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"2407e444-c4b6-488a-b397-10febd8cdf44\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194540 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host" (OuterVolumeSpecName: "host") pod "2407e444-c4b6-488a-b397-10febd8cdf44" (UID: "2407e444-c4b6-488a-b397-10febd8cdf44"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194779 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.205274 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72" (OuterVolumeSpecName: "kube-api-access-nrm72") pod "2407e444-c4b6-488a-b397-10febd8cdf44" (UID: "2407e444-c4b6-488a-b397-10febd8cdf44"). InnerVolumeSpecName "kube-api-access-nrm72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.296218 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.885090 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-tl96s"] Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.896085 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-tl96s"] Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.983102 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af38a403c914f66e3391e16b5d16fd2af804d00b0066101c8b2d179624a3dc49" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.983205 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:54 crc kubenswrapper[4793]: I0130 15:20:54.409364 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" path="/var/lib/kubelet/pods/2407e444-c4b6-488a-b397-10febd8cdf44/volumes" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.109371 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/crc-debug-g44lj"] Jan 30 15:20:55 crc kubenswrapper[4793]: E0130 15:20:55.110182 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" containerName="container-00" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.110212 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" containerName="container-00" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.110555 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" containerName="container-00" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.111402 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.113608 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5swb7"/"default-dockercfg-nk8fm"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.230663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.230973 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.332590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.333250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.333381 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.349368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.426981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:55 crc kubenswrapper[4793]: W0130 15:20:55.455199 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d975c3f_305f_4a75_9776_5a5c98e567f3.slice/crio-4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21 WatchSource:0}: Error finding container 4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21: Status 404 returned error can't find the container with id 4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21
Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.002670 4793 generic.go:334] "Generic (PLEG): container finished" podID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerID="437d7045fe7a0e2d3b1219fd70c03224ab5b83cded85d2ea40b54b54f24df894" exitCode=0
Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.003030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-g44lj" event={"ID":"6d975c3f-305f-4a75-9776-5a5c98e567f3","Type":"ContainerDied","Data":"437d7045fe7a0e2d3b1219fd70c03224ab5b83cded85d2ea40b54b54f24df894"}
Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.003137 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-g44lj" event={"ID":"6d975c3f-305f-4a75-9776-5a5c98e567f3","Type":"ContainerStarted","Data":"4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21"}
Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.043843 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-g44lj"]
Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.051595 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-g44lj"]
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.106571 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167011 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"6d975c3f-305f-4a75-9776-5a5c98e567f3\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") "
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"6d975c3f-305f-4a75-9776-5a5c98e567f3\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") "
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167358 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host" (OuterVolumeSpecName: "host") pod "6d975c3f-305f-4a75-9776-5a5c98e567f3" (UID: "6d975c3f-305f-4a75-9776-5a5c98e567f3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167716 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") on node \"crc\" DevicePath \"\""
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.175312 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9" (OuterVolumeSpecName: "kube-api-access-z2ks9") pod "6d975c3f-305f-4a75-9776-5a5c98e567f3" (UID: "6d975c3f-305f-4a75-9776-5a5c98e567f3"). InnerVolumeSpecName "kube-api-access-z2ks9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.269272 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") on node \"crc\" DevicePath \"\""
Jan 30 15:20:58 crc kubenswrapper[4793]: I0130 15:20:58.023509 4793 scope.go:117] "RemoveContainer" containerID="437d7045fe7a0e2d3b1219fd70c03224ab5b83cded85d2ea40b54b54f24df894"
Jan 30 15:20:58 crc kubenswrapper[4793]: I0130 15:20:58.023682 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj"
Jan 30 15:20:58 crc kubenswrapper[4793]: I0130 15:20:58.409122 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" path="/var/lib/kubelet/pods/6d975c3f-305f-4a75-9776-5a5c98e567f3/volumes"
Jan 30 15:21:00 crc kubenswrapper[4793]: I0130 15:21:00.403361 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:21:00 crc kubenswrapper[4793]: E0130 15:21:00.403879 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:21:13 crc kubenswrapper[4793]: I0130 15:21:13.398130 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:21:13 crc kubenswrapper[4793]: E0130 15:21:13.398885 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:21:25 crc kubenswrapper[4793]: I0130 15:21:25.398217 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:21:25 crc kubenswrapper[4793]: E0130 15:21:25.399132 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:21:36 crc kubenswrapper[4793]: I0130 15:21:36.422078 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:21:36 crc kubenswrapper[4793]: E0130 15:21:36.423448 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:21:44 crc kubenswrapper[4793]: I0130 15:21:44.906314 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api/0.log"
Jan 30 15:21:44 crc kubenswrapper[4793]: I0130 15:21:44.988647 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api-log/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.174554 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.221296 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener-log/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.284085 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.393864 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker-log/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.539061 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6_2ba6b544-0042-43d7-abe9-bc40439f804b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.679194 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-central-agent/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.739792 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-notification-agent/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.828256 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/proxy-httpd/0.log"
Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.845196 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/sg-core/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.055683 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api-log/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.115949 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.290033 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/cinder-scheduler/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.431078 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/probe/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.500625 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc_260f1ea9-6ba5-40aa-ab56-e95237cb1009/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.688577 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jchk2_44f4e8fd-4511-4670-944a-e37dfc6238c8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.728451 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log"
Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.920018 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.022958 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qgztn_f1632f4b-e0e5-4069-a77b-ae4f1911869b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.203545 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/dnsmasq-dns/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.307273 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-log/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.310865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-httpd/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.487169 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-httpd/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.538664 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-log/0.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.867842 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/1.log"
Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.896089 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/2.log"
Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.313204 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp_ae4f8964-b104-43bb-8356-bb53a9635527/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.421305 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lqrxr_1ee9c552-088f-4e61-961e-7062bf6e874b/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.446789 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon-log/0.log"
Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.577344 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496421-n28p5_617a2857-c4b0-4558-9834-551a98cd534f/keystone-cron/0.log"
Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.883847 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a3625667-be35-4d81-84f9-e00593f1c627/kube-state-metrics/0.log"
Jan 30 15:21:49 crc kubenswrapper[4793]: I0130 15:21:49.216979 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2_96926233-9ce4-4a0b-bab4-d0c4fa90389b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:49 crc kubenswrapper[4793]: I0130 15:21:49.231145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d689db86f-zslsz_0ed57c3d-4992-4cfa-8655-1587b5897df6/keystone-api/0.log"
Jan 30 15:21:49 crc kubenswrapper[4793]: I0130 15:21:49.398978 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:21:49 crc kubenswrapper[4793]: E0130 15:21:49.399236 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:21:50 crc kubenswrapper[4793]: I0130 15:21:50.206884 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk_92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:50 crc kubenswrapper[4793]: I0130 15:21:50.413694 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-httpd/0.log"
Jan 30 15:21:50 crc kubenswrapper[4793]: I0130 15:21:50.721863 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-api/0.log"
Jan 30 15:21:51 crc kubenswrapper[4793]: I0130 15:21:51.501322 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7/nova-cell0-conductor-conductor/0.log"
Jan 30 15:21:51 crc kubenswrapper[4793]: I0130 15:21:51.958281 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d2acd609-26c0-4b98-861f-a8b12fcd07bf/nova-cell1-conductor-conductor/0.log"
Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.218997 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-log/0.log"
Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.287006 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_abaabb74-42dd-40b6-9cb7-69db46f235df/nova-cell1-novncproxy-novncproxy/0.log"
Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.607865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sk8t8_dfc4d2ba-0414-4f1e-8733-a75d39218ef8/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.671986 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-log/0.log"
Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.935598 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-api/0.log"
Jan 30 15:21:53 crc kubenswrapper[4793]: I0130 15:21:53.599551 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log"
Jan 30 15:21:53 crc kubenswrapper[4793]: I0130 15:21:53.840855 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log"
Jan 30 15:21:53 crc kubenswrapper[4793]: I0130 15:21:53.853661 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/galera/0.log"
Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.146828 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log"
Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.310421 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9e04e820-112a-4afa-b908-f9b8be3e9e7c/nova-scheduler-scheduler/0.log"
Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.575412 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log"
Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.657676 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/galera/0.log"
Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.948177 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7/openstackclient/0.log"
Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.962736 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-45fd5_230700ff-5087-4d0d-9d93-90b597d2ef72/ovn-controller/0.log"
Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.319549 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vx7z5_2eaf3033-e5f4-48bc-bdee-b7d97e57e765/openstack-network-exporter/0.log"
Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.636707 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log"
Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.891099 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovs-vswitchd/0.log"
Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.900939 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-metadata/0.log"
Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.903428 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server/0.log"
Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.946030 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log"
Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.200568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-45sz7_dbd66148-cdd0-4e92-9601-3ef1576a5d3f/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.361615 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/openstack-network-exporter/0.log"
Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.558163 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/ovn-northd/0.log"
Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.644460 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/openstack-network-exporter/0.log"
Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.707363 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/ovsdbserver-nb/0.log"
Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.168701 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/ovsdbserver-sb/0.log"
Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.265890 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/openstack-network-exporter/0.log"
Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.713419 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log"
Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.793795 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-api/0.log"
Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.835403 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-log/0.log"
Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.915517 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.072001 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/rabbitmq/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.136085 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.470803 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/rabbitmq/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.521775 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.550636 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_89e99d15-97ad-4ac5-ba68-82ef88460222/memcached/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.551420 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7_0538b501-a861-4302-b26e-f5cfb17ed62a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.796509 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t7bl5_b89c70f6-dabd-4984-8f21-235a9ab2f307/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.849498 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8_03127c65-edbf-41bd-9543-35ae0eddbff6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.031556 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5q58_7915ec77-ca16-4f23-a367-42b525c80284/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.032235 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nlncv_3cad1dbc-effe-48d8-af45-df0a45e16783/ssh-known-hosts-edpm-deployment/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.287485 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-httpd/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.306390 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-server/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.433466 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-q459t_50011731-846f-4e86-8664-f9c797dc64ed/swift-ring-rebalance/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.524794 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-auditor/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.560867 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-reaper/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.709933 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-server/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.756313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-replicator/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.827167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-replicator/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.859699 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-auditor/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.914946 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-server/0.log"
Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.988834 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-updater/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.096353 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-auditor/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.138966 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-server/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.178629 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-expirer/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.205068 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-replicator/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.255698 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-updater/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.378526 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/rsync/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.448568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/swift-recon-cron/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.740841 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb_8b1317e1-63f1-4b06-aa31-5df5459c6ce6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.907568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4bf53e2d-d024-4526-ada2-0ee6b461babb/tempest-tests-tempest-tests-runner/0.log"
Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.995943 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8de9d25e-7ca7-4338-a64e-ed95f7bd9de9/test-operator-logs-container/0.log"
Jan 30 15:22:01 crc kubenswrapper[4793]: I0130 15:22:01.077167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt_dcc6f491-d722-48e4-bcb8-8a9de7603786/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 15:22:01 crc kubenswrapper[4793]: I0130 15:22:01.398925 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:22:01 crc kubenswrapper[4793]: E0130 15:22:01.399204 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:22:13 crc kubenswrapper[4793]: I0130 15:22:13.397938 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:22:13 crc kubenswrapper[4793]: E0130 15:22:13.398874 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.314104 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"]
Jan 30 15:22:19 crc kubenswrapper[4793]: E0130 15:22:19.315752 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerName="container-00"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.315828 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerName="container-00"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.316111 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerName="container-00"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.317658 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.329342 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"]
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.428574 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.429040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.429155 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.531826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.531967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.532146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.532762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.532950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.574589 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.635222 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:20 crc kubenswrapper[4793]: I0130 15:22:20.183235 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"]
Jan 30 15:22:20 crc kubenswrapper[4793]: I0130 15:22:20.200961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerStarted","Data":"4b6786f8facf2d6a7b0627908cca7f765498a995e412d74b8f28cd406462599b"}
Jan 30 15:22:21 crc kubenswrapper[4793]: I0130 15:22:21.214201 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911" exitCode=0
Jan 30 15:22:21 crc kubenswrapper[4793]: I0130 15:22:21.214809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"}
Jan 30 15:22:23 crc kubenswrapper[4793]: I0130 15:22:23.244400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerStarted","Data":"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"}
Jan 30 15:22:24 crc kubenswrapper[4793]: I0130 15:22:24.256839 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9" exitCode=0
Jan 30 15:22:24 crc kubenswrapper[4793]: I0130 15:22:24.257070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"}
Jan 30 15:22:25 crc kubenswrapper[4793]: I0130 15:22:25.287431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerStarted","Data":"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"}
Jan 30 15:22:25 crc kubenswrapper[4793]: I0130 15:22:25.318216 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4r5cl" podStartSLOduration=2.7019189150000003 podStartE2EDuration="6.318197609s" podCreationTimestamp="2026-01-30 15:22:19 +0000 UTC" firstStartedPulling="2026-01-30 15:22:21.229092515 +0000 UTC m=+5951.930441006" lastFinishedPulling="2026-01-30 15:22:24.845371209 +0000 UTC m=+5955.546719700" observedRunningTime="2026-01-30 15:22:25.314431045 +0000 UTC m=+5956.015779566" watchObservedRunningTime="2026-01-30 15:22:25.318197609 +0000 UTC m=+5956.019546100"
Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.398724 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:22:28 crc kubenswrapper[4793]: E0130 15:22:28.399201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.536900 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-8bg6c_ec981da4-a3ba-4e4e-a0eb-2168ab79fe77/manager/0.log"
Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.587672 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log"
Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.819135 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log"
Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.824491 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log"
Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.870895 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.049463 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.078821 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/extract/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.180280 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.309443 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-hjpkr_6f991e04-2db3-4b32-bc83-8bbce4ce7a08/manager/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.309567 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-9kwwr_8835e5d9-c37d-4744-95cb-c56c10a58647/manager/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.635361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.635678 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.636285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-k4tz9_8d24cd33-2902-424a-8ffc-76b1e4c2f482/manager/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.683926 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.701077 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-g5848_1d859404-a29c-46c9-b66a-fed5ff0b13f0/manager/0.log"
Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.843663 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-m4q78_710c57e4-a09e-4db1-a03b-13db05085d41/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.109967 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v77jx_7c34e714-0f18-4e41-ab9c-1dfe4859e644/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.238948 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-khfs7_97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.387370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.411366 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-82cvq_bdcd04f7-09fa-4b1b-8b99-3de61a28a337/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.503350 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-9ftxd_ce9be14f-8255-421e-91b4-a30fc5482ff4/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.663460 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n29l5_fa88d14c-0581-439c-9da1-f1123e41a65a/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.807481 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-x6pk6_05415bc7-22dc-4b15-a047-6ed62755638d/manager/0.log"
Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.977513 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-vtx9d_31ca6ac1-d2da-4325-baa4-e18fc3514721/manager/0.log"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.051284 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5nsr4_53576ec8-2f6d-4781-8906-726529cc6049/manager/0.log"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.159694 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs_e446e97c-6e9f-4dc2-b5fd-fb63451fd326/manager/0.log"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.413042 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-977cfdb67-sp4rd_2cec3782-823b-4ddf-909a-e773203cd721/operator/0.log"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.705751 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"]
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.707778 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.726998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"]
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.784455 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x56zx_e3b6e703-4540-4739-87cd-8699d4e04903/registry-server/0.log"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.789003 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.789502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.789921 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.892434 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.893785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.893894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.894661 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.897444 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.930836 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.021370 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4ml88_6231ed92-57a8-4c48-9c75-e916940b22ea/manager/0.log"
Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.076967 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf"
Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.520905 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-27flx_02b8e60c-3514-4d72-bde6-5af374a926b1/manager/0.log"
Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.679459 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"]
Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.785089 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nb4g2_2aae677d-830b-44b8-a792-3d0b527aee89/operator/0.log"
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.019670 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-vxhpt_3eb94c51-d506-4273-898b-dba537cabea6/manager/0.log"
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.027268 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75c5857d49-pm446_e9854850-e645-4364-a471-bef994f8536c/manager/0.log"
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.225292 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-tv5vr_6b21b0ca-d506-4b1b-b6e1-06e2a96ae033/manager/0.log"
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.355877 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" exitCode=0
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.355925 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997"}
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.355952 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"8372c971f9f6c2985247616cba22145cd94668d2cdaaebf62f2b83a40bacf8bb"}
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.454103 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-qb5xp_5e215cef-de14-424d-9028-a48bad979192/manager/0.log"
Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.826679 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-btjpp_f65e9448-ee4e-4f22-9bd7-ecf650cb36b5/manager/0.log"
Jan 30 15:22:34 crc kubenswrapper[4793]: I0130 15:22:34.365208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f"}
Jan 30 15:22:34 crc kubenswrapper[4793]: I0130 15:22:34.490638 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"]
Jan 30 15:22:34 crc kubenswrapper[4793]: I0130 15:22:34.490948 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4r5cl" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" containerID="cri-o://73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" gracePeriod=2
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.017384 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.160467 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") "
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.160531 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") "
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.160566 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") "
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.161116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities" (OuterVolumeSpecName: "utilities") pod "a9f9d306-0d7d-4586-a327-f32c5cfe12aa" (UID: "a9f9d306-0d7d-4586-a327-f32c5cfe12aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.181293 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn" (OuterVolumeSpecName: "kube-api-access-sdjxn") pod "a9f9d306-0d7d-4586-a327-f32c5cfe12aa" (UID: "a9f9d306-0d7d-4586-a327-f32c5cfe12aa"). InnerVolumeSpecName "kube-api-access-sdjxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.186518 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9f9d306-0d7d-4586-a327-f32c5cfe12aa" (UID: "a9f9d306-0d7d-4586-a327-f32c5cfe12aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.263227 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.263264 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.263279 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") on node \"crc\" DevicePath \"\""
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.375949 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" exitCode=0
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376014 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"}
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"4b6786f8facf2d6a7b0627908cca7f765498a995e412d74b8f28cd406462599b"}
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376121 4793 scope.go:117] "RemoveContainer" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.401171 4793 scope.go:117] "RemoveContainer" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.432501 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"]
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.445209 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"]
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.450827 4793 scope.go:117] "RemoveContainer" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.499847 4793 scope.go:117] "RemoveContainer" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"
Jan 30 15:22:35 crc kubenswrapper[4793]: E0130 15:22:35.502627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8\": container with ID starting with 73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8 not found: ID does not exist" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.502672 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"} err="failed to get container status \"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8\": rpc error: code = NotFound desc = could not find container \"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8\": container with ID starting with 73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8 not found: ID does not exist"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.502699 4793 scope.go:117] "RemoveContainer" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"
Jan 30 15:22:35 crc kubenswrapper[4793]: E0130 15:22:35.503444 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9\": container with ID starting with c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9 not found: ID does not exist" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.503491 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"} err="failed to get container status \"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9\": rpc error: code = NotFound desc = could not find container \"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9\": container with ID starting with c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9 not found: ID does not exist"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.503521 4793 scope.go:117] "RemoveContainer" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"
Jan 30 15:22:35 crc kubenswrapper[4793]: E0130 15:22:35.504330 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911\": container with ID starting with 03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911 not found: ID does not exist" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"
Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.504368 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"} err="failed to get container status \"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911\": rpc error: code = NotFound desc = could not find container \"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911\": container with ID starting with 03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911 not found: ID does not exist"
Jan 30 15:22:36 crc kubenswrapper[4793]: I0130 15:22:36.409683 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" path="/var/lib/kubelet/pods/a9f9d306-0d7d-4586-a327-f32c5cfe12aa/volumes"
Jan 30 15:22:42 crc kubenswrapper[4793]: I0130 15:22:42.398235 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"
Jan 30 15:22:42 crc kubenswrapper[4793]: E0130 15:22:42.399888 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 15:22:45 crc kubenswrapper[4793]: I0130 15:22:45.470872 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" exitCode=0
Jan 30 15:22:45 crc kubenswrapper[4793]: I0130 15:22:45.471093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f"}
Jan 30 15:22:47 crc kubenswrapper[4793]: I0130 15:22:47.491926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf"
event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} Jan 30 15:22:47 crc kubenswrapper[4793]: I0130 15:22:47.525827 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlmdf" podStartSLOduration=3.179543475 podStartE2EDuration="16.525809867s" podCreationTimestamp="2026-01-30 15:22:31 +0000 UTC" firstStartedPulling="2026-01-30 15:22:33.357509003 +0000 UTC m=+5964.058857494" lastFinishedPulling="2026-01-30 15:22:46.703775395 +0000 UTC m=+5977.405123886" observedRunningTime="2026-01-30 15:22:47.517535253 +0000 UTC m=+5978.218883744" watchObservedRunningTime="2026-01-30 15:22:47.525809867 +0000 UTC m=+5978.227158358" Jan 30 15:22:52 crc kubenswrapper[4793]: I0130 15:22:52.079888 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:52 crc kubenswrapper[4793]: I0130 15:22:52.080414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:53 crc kubenswrapper[4793]: I0130 15:22:53.142692 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:22:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:22:53 crc kubenswrapper[4793]: > Jan 30 15:22:53 crc kubenswrapper[4793]: I0130 15:22:53.398652 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:22:53 crc kubenswrapper[4793]: E0130 15:22:53.398967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.462475 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.504325 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/1.log" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.765564 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/kube-rbac-proxy/0.log" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.793911 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/machine-api-operator/0.log" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.631719 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:22:56 crc kubenswrapper[4793]: E0130 15:22:56.632497 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-content" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.632806 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-content" Jan 30 15:22:56 crc kubenswrapper[4793]: E0130 15:22:56.632827 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-utilities" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.632839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-utilities" Jan 30 15:22:56 crc kubenswrapper[4793]: E0130 15:22:56.632852 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.632860 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.633196 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.634913 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.646773 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.710631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.710766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.710806 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.812519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.812704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.812863 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.813352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.813352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.838628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.954657 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:57 crc kubenswrapper[4793]: I0130 15:22:57.560688 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:22:57 crc kubenswrapper[4793]: I0130 15:22:57.591597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerStarted","Data":"67b39464bec1710449607f7c3521e7192c615bc0f3447d2003996ee508c4b158"} Jan 30 15:22:58 crc kubenswrapper[4793]: I0130 15:22:58.603274 4793 generic.go:334] "Generic (PLEG): container finished" podID="c35934a1-325a-4231-8dde-9357aab2af3f" containerID="612fafe439052cb8b36014e5e1fdcf820fd924ff9c4da2d5454871cca09f6085" exitCode=0 Jan 30 15:22:58 crc kubenswrapper[4793]: I0130 15:22:58.603347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"612fafe439052cb8b36014e5e1fdcf820fd924ff9c4da2d5454871cca09f6085"} Jan 30 15:23:00 crc kubenswrapper[4793]: I0130 15:23:00.628183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerStarted","Data":"b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9"} Jan 30 15:23:03 crc kubenswrapper[4793]: I0130 15:23:03.125828 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:03 crc kubenswrapper[4793]: > Jan 30 15:23:07 crc kubenswrapper[4793]: I0130 15:23:07.834291 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 15:23:08 crc kubenswrapper[4793]: I0130 15:23:08.399411 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:08 crc kubenswrapper[4793]: E0130 15:23:08.399630 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:10 crc kubenswrapper[4793]: I0130 15:23:10.493894 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lm7l8_e88efb4a-1489-4847-adb4-230a8b5db6ef/cert-manager-webhook/0.log" Jan 30 15:23:12 crc kubenswrapper[4793]: I0130 15:23:12.833580 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 15:23:13 crc kubenswrapper[4793]: I0130 15:23:13.125573 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" 
podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:13 crc kubenswrapper[4793]: > Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.466892 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tzjhq_8fd78cec-1c0f-427e-8224-4021da0ede3c/cert-manager-cainjector/0.log" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.649264 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-26t5l_1b680507-f432-4019-b372-d9452d89aa97/cert-manager-controller/0.log" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.880917 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 30 15:23:16 crc kubenswrapper[4793]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 30 15:23:16 crc kubenswrapper[4793]: > Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.881020 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.881962 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.882099 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" containerID="cri-o://67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f" gracePeriod=30 Jan 30 15:23:18 crc kubenswrapper[4793]: I0130 15:23:18.773789 4793 generic.go:334] "Generic (PLEG): container finished" podID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerID="67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f" exitCode=0 Jan 30 15:23:18 crc kubenswrapper[4793]: I0130 15:23:18.773861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerDied","Data":"67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f"} Jan 30 15:23:19 crc kubenswrapper[4793]: I0130 15:23:19.662650 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:19 crc kubenswrapper[4793]: E0130 15:23:19.662964 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:20 crc kubenswrapper[4793]: I0130 15:23:20.235657 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:23:23 crc kubenswrapper[4793]: I0130 15:23:23.127307 4793 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:23 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:23 crc kubenswrapper[4793]: > Jan 30 15:23:23 crc kubenswrapper[4793]: I0130 15:23:23.819271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"e0afffecc4a1d26ccd13cb484429754b46c22d6988a46071be25b6f7627edd50"} Jan 30 15:23:24 crc kubenswrapper[4793]: I0130 15:23:24.828525 4793 generic.go:334] "Generic (PLEG): container finished" podID="c35934a1-325a-4231-8dde-9357aab2af3f" containerID="b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9" exitCode=0 Jan 30 15:23:24 crc kubenswrapper[4793]: I0130 15:23:24.828579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9"} Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.198837 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kc5ft_5df01042-63fe-458a-b71d-d1f9bdf9ea66/nmstate-console-plugin/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.370901 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dh9db_e635e428-77d8-44fb-baa4-1af4bd603c10/nmstate-handler/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.441057 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/kube-rbac-proxy/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.469463 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/nmstate-metrics/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.810313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9bsps_1f691ecb-c128-4332-a7ab-c4e173490f50/nmstate-operator/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.813461 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hw489_68bcadc4-02c3-44c0-a252-0606ff1f0a09/nmstate-webhook/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.840593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerStarted","Data":"64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd"} Jan 30 15:23:26 crc kubenswrapper[4793]: I0130 15:23:26.870250 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d86tm" podStartSLOduration=4.071586866 podStartE2EDuration="30.870228055s" podCreationTimestamp="2026-01-30 15:22:56 +0000 UTC" firstStartedPulling="2026-01-30 15:22:58.605664051 +0000 UTC m=+5989.307012542" lastFinishedPulling="2026-01-30 15:23:25.40430524 +0000 UTC m=+6016.105653731" observedRunningTime="2026-01-30 15:23:26.866552285 +0000 UTC m=+6017.567900776" watchObservedRunningTime="2026-01-30 
Jan 30 15:23:26 crc kubenswrapper[4793]: I0130 15:23:26.955660 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:26 crc kubenswrapper[4793]: I0130 15:23:26.955703 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:28 crc kubenswrapper[4793]: I0130 15:23:28.013874 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d86tm" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:28 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:28 crc kubenswrapper[4793]: > Jan 30 15:23:33 crc kubenswrapper[4793]: I0130 15:23:33.139517 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:33 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:33 crc kubenswrapper[4793]: > Jan 30 15:23:34 crc kubenswrapper[4793]: I0130 15:23:34.398815 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:34 crc kubenswrapper[4793]: E0130 15:23:34.399182 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:37 crc kubenswrapper[4793]: I0130 15:23:37.005970 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:37 crc kubenswrapper[4793]: I0130 15:23:37.066456 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:38 crc kubenswrapper[4793]: I0130 15:23:38.170961 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:23:38 crc kubenswrapper[4793]: I0130 15:23:38.958723 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d86tm" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" containerID="cri-o://64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd" gracePeriod=2 Jan 30 15:23:39 crc kubenswrapper[4793]: I0130 15:23:39.972791 4793 generic.go:334] "Generic (PLEG): container finished" podID="c35934a1-325a-4231-8dde-9357aab2af3f" containerID="64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd" exitCode=0 Jan 30 15:23:39 crc kubenswrapper[4793]: I0130 15:23:39.973066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd"} Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.128371 4793 util.go:48] "No ready sandbox for pod can be
found. Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.312720 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"c35934a1-325a-4231-8dde-9357aab2af3f\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.312880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"c35934a1-325a-4231-8dde-9357aab2af3f\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.312956 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"c35934a1-325a-4231-8dde-9357aab2af3f\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.313835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities" (OuterVolumeSpecName: "utilities") pod "c35934a1-325a-4231-8dde-9357aab2af3f" (UID: "c35934a1-325a-4231-8dde-9357aab2af3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.317384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc" (OuterVolumeSpecName: "kube-api-access-578xc") pod "c35934a1-325a-4231-8dde-9357aab2af3f" (UID: "c35934a1-325a-4231-8dde-9357aab2af3f"). InnerVolumeSpecName "kube-api-access-578xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.367301 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c35934a1-325a-4231-8dde-9357aab2af3f" (UID: "c35934a1-325a-4231-8dde-9357aab2af3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.415296 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") on node \"crc\" DevicePath \"\"" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.415545 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.415613 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.984300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"67b39464bec1710449607f7c3521e7192c615bc0f3447d2003996ee508c4b158"} Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.984544 4793 scope.go:117] "RemoveContainer" containerID="64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.984354 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.009821 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.020622 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.024863 4793 scope.go:117] "RemoveContainer" containerID="b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9" Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.063643 4793 scope.go:117] "RemoveContainer" containerID="612fafe439052cb8b36014e5e1fdcf820fd924ff9c4da2d5454871cca09f6085" Jan 30 15:23:42 crc kubenswrapper[4793]: I0130 15:23:42.413755 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" path="/var/lib/kubelet/pods/c35934a1-325a-4231-8dde-9357aab2af3f/volumes" Jan 30 15:23:43 crc kubenswrapper[4793]: I0130 15:23:43.135635 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:43 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:43 crc kubenswrapper[4793]: > Jan 30 15:23:46 crc kubenswrapper[4793]: I0130 15:23:46.398670 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:46 crc kubenswrapper[4793]: E0130 15:23:46.399145 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:53 crc kubenswrapper[4793]: I0130 15:23:53.131998 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:53 crc kubenswrapper[4793]: > Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.424748 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/kube-rbac-proxy/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.548748 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/controller/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.710315 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.863982 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.904407 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.958894 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.006918 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.247701 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.255975 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.293865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.300360 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.470744 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.474916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.521531 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:58 crc 
kubenswrapper[4793]: I0130 15:23:58.580856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/controller/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.719916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.740185 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.834549 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy-frr/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.960957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/reloader/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.179763 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4p6gx_e5a76649-d081-4224-baca-095ca1ffadfd/frr-k8s-webhook-server/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.437089 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fbd4d697c-ndglw_75266e51-59ee-432d-b56a-ba972e5ff25b/manager/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.564817 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6446fc49bd-rzbbm_45949f1b-1075-4d7f-9007-8525e0364a55/webhook-server/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.896251 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/kube-rbac-proxy/0.log" Jan 30 15:24:00 crc kubenswrapper[4793]: I0130 15:24:00.386277 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr/0.log" Jan 30 15:24:00 crc kubenswrapper[4793]: I0130 15:24:00.788743 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/speaker/0.log" Jan 30 15:24:01 crc kubenswrapper[4793]: I0130 15:24:01.398650 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:01 crc kubenswrapper[4793]: E0130 15:24:01.399003 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:03 crc kubenswrapper[4793]: I0130 15:24:03.210418 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:03 crc kubenswrapper[4793]: > Jan 30 15:24:04 crc kubenswrapper[4793]: 
I0130 15:24:04.929230 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-vsdkv" podUID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 15:24:12 crc kubenswrapper[4793]: I0130 15:24:12.398681 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:12 crc kubenswrapper[4793]: E0130 15:24:12.399333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:13 crc kubenswrapper[4793]: I0130 15:24:13.137550 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:13 crc kubenswrapper[4793]: > Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.167273 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.440723 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.512003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.533898 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.803783 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.812304 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.814425 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/extract/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.181263 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: 
I0130 15:24:17.600029 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.607266 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.621657 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.929287 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/extract/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.145773 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.207925 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.331745 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.549831 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.588145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.633506 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.763216 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.837133 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.049991 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.527656 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.580471 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.645207 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.662458 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/registry-server/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.845360 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.012033 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zkjbp_5834bf4b-676f-4ece-bcee-28949a7109ca/marketplace-operator/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.374134 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.601040 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.680352 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.833095 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.032559 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.243178 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.326412 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.707606 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/registry-server/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.002379 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/registry-server/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.005298 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.340771 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-content/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.340856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-content/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.354939 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.574347 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-content/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.578717 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/registry-server/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.603470 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.689681 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.924796 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.982413 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.013448 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.131897 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:23 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:23 crc kubenswrapper[4793]: > Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.131988 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.132721 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} pod="openshift-marketplace/redhat-operators-nlmdf" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.132767 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" 
containerID="cri-o://9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" gracePeriod=30 Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.147162 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.163084 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.950873 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/registry-server/0.log" Jan 30 15:24:24 crc kubenswrapper[4793]: I0130 15:24:24.399957 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:24 crc kubenswrapper[4793]: E0130 15:24:24.400923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:37 crc kubenswrapper[4793]: I0130 15:24:37.398880 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:37 crc kubenswrapper[4793]: E0130 15:24:37.399550 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:48 crc kubenswrapper[4793]: I0130 15:24:48.398872 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:48 crc kubenswrapper[4793]: I0130 15:24:48.763701 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e"} Jan 30 15:24:49 crc kubenswrapper[4793]: I0130 15:24:49.775699 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" exitCode=0 Jan 30 15:24:49 crc kubenswrapper[4793]: I0130 15:24:49.775784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} Jan 30 15:24:49 crc kubenswrapper[4793]: I0130 15:24:49.776044 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" 
event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0"} Jan 30 15:24:52 crc kubenswrapper[4793]: I0130 15:24:52.079889 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:24:52 crc kubenswrapper[4793]: I0130 15:24:52.080442 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:24:53 crc kubenswrapper[4793]: I0130 15:24:53.163262 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:53 crc kubenswrapper[4793]: > Jan 30 15:25:03 crc kubenswrapper[4793]: I0130 15:25:03.145170 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:25:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:25:03 crc kubenswrapper[4793]: > Jan 30 15:25:13 crc kubenswrapper[4793]: I0130 15:25:13.121102 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:25:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:25:13 crc kubenswrapper[4793]: > Jan 30 15:25:22 crc kubenswrapper[4793]: I0130 15:25:22.127137 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:22 crc kubenswrapper[4793]: I0130 15:25:22.189984 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:22 crc kubenswrapper[4793]: I0130 15:25:22.369308 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:25:24 crc kubenswrapper[4793]: I0130 15:25:24.095122 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" containerID="cri-o://b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" gracePeriod=2 Jan 30 15:25:24 crc kubenswrapper[4793]: I0130 15:25:24.933813 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104443 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" exitCode=0 Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104544 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0"} Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"8372c971f9f6c2985247616cba22145cd94668d2cdaaebf62f2b83a40bacf8bb"} Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104884 4793 scope.go:117] "RemoveContainer" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104560 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.127111 4793 scope.go:117] "RemoveContainer" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.127753 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.127946 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.128017 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.128871 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities" (OuterVolumeSpecName: "utilities") pod "ba5c5be7-e683-443f-a3b6-7b3507b68aa6" (UID: "ba5c5be7-e683-443f-a3b6-7b3507b68aa6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.134020 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb" (OuterVolumeSpecName: "kube-api-access-42qfb") pod "ba5c5be7-e683-443f-a3b6-7b3507b68aa6" (UID: "ba5c5be7-e683-443f-a3b6-7b3507b68aa6"). InnerVolumeSpecName "kube-api-access-42qfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.201720 4793 scope.go:117] "RemoveContainer" containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.223487 4793 scope.go:117] "RemoveContainer" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.230657 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") on node \"crc\" DevicePath \"\"" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.230698 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.268429 4793 scope.go:117] "RemoveContainer" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.270600 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0\": container with ID starting with b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0 not found: ID does not exist" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.270635 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0"} err="failed to get container status \"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0\": rpc error: code = NotFound desc = could not find container \"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0\": container with ID starting with b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0 not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.270672 4793 scope.go:117] "RemoveContainer" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.270960 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd\": container with ID starting with 9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd not found: ID does not exist" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271011 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} err="failed to get container status \"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd\": rpc error: code = NotFound desc = could not find container \"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd\": container with ID starting with 9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271032 4793 scope.go:117] "RemoveContainer" 
containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.271318 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f\": container with ID starting with 4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f not found: ID does not exist" containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271340 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f"} err="failed to get container status \"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f\": rpc error: code = NotFound desc = could not find container \"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f\": container with ID starting with 4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271354 4793 scope.go:117] "RemoveContainer" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.271616 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997\": container with ID starting with 96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997 not found: ID does not exist" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271700 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997"} err="failed to get container status \"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997\": rpc error: code = NotFound desc = could not find container \"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997\": container with ID starting with 96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997 not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.279367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba5c5be7-e683-443f-a3b6-7b3507b68aa6" (UID: "ba5c5be7-e683-443f-a3b6-7b3507b68aa6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.332423 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.453861 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.465094 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:25:26 crc kubenswrapper[4793]: I0130 15:25:26.416908 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" path="/var/lib/kubelet/pods/ba5c5be7-e683-443f-a3b6-7b3507b68aa6/volumes" Jan 30 15:26:01 crc kubenswrapper[4793]: I0130 15:26:01.816799 4793 scope.go:117] "RemoveContainer" containerID="86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b" Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.259723 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerID="bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5" exitCode=0 Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.259837 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerDied","Data":"bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5"} Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.260923 4793 scope.go:117] "RemoveContainer" containerID="bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5" Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.373901 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/gather/0.log" Jan 30 15:27:02 crc kubenswrapper[4793]: I0130 15:27:02.016413 4793 scope.go:117] "RemoveContainer" containerID="85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f" Jan 30 15:27:12 crc kubenswrapper[4793]: I0130 15:27:12.415001 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:27:12 crc kubenswrapper[4793]: I0130 15:27:12.415733 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:27:13 crc kubenswrapper[4793]: I0130 15:27:13.861334 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:27:13 crc kubenswrapper[4793]: I0130 15:27:13.862302 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5swb7/must-gather-9zdpz" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" containerID="cri-o://4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144" gracePeriod=2 Jan 30 15:27:13 crc 
kubenswrapper[4793]: I0130 15:27:13.871283 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.431510 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/copy/0.log" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.436637 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerID="4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144" exitCode=143 Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.636973 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/copy/0.log" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.637831 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.745666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.745742 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.762453 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv" (OuterVolumeSpecName: "kube-api-access-gm2xv") pod "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" (UID: "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72"). InnerVolumeSpecName "kube-api-access-gm2xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.847920 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") on node \"crc\" DevicePath \"\"" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.931147 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" (UID: "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.950336 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.446164 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/copy/0.log" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.446684 4793 scope.go:117] "RemoveContainer" containerID="4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.446778 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.466457 4793 scope.go:117] "RemoveContainer" containerID="bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5" Jan 30 15:27:16 crc kubenswrapper[4793]: I0130 15:27:16.411830 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" path="/var/lib/kubelet/pods/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/volumes" Jan 30 15:27:42 crc kubenswrapper[4793]: I0130 15:27:42.413434 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:27:42 crc kubenswrapper[4793]: I0130 15:27:42.415306 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.413317 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.413946 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.414006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.414905 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 
15:28:12.414980 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e" gracePeriod=600 Jan 30 15:28:13 crc kubenswrapper[4793]: I0130 15:28:13.022833 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e" exitCode=0 Jan 30 15:28:13 crc kubenswrapper[4793]: I0130 15:28:13.022953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e"} Jan 30 15:28:13 crc kubenswrapper[4793]: I0130 15:28:13.023501 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:28:14 crc kubenswrapper[4793]: I0130 15:28:14.036180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"a1b734ff73ea9573c19a8fab41ab955c2ee3f3e6aa5ff281c71092fb8c35b49b"} Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.245273 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.247342 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.247428 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.247499 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.247847 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.247924 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.247985 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248072 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248128 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248192 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248248 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248311 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248376 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248437 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248493 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248555 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248611 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248681 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="gather" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248819 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="gather" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249068 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249150 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249219 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249292 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="gather" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249365 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.250720 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.261216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.389343 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.389424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.389549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.494494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.494609 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.494736 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.495340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.495888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.529122 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.574289 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:45 crc kubenswrapper[4793]: I0130 15:28:45.231806 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.141641 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f41cf99-6474-4b53-b297-0290b4566657" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" exitCode=0 Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.141734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a"} Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.143178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerStarted","Data":"76acece91eace693a7db849b9c561c197137451e4bc3f1f7ff8fcea4e1b97c9c"} Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.143840 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:28:48 crc kubenswrapper[4793]: I0130 15:28:48.161930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerStarted","Data":"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f"} Jan 30 15:28:49 crc kubenswrapper[4793]: I0130 15:28:49.174759 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f41cf99-6474-4b53-b297-0290b4566657" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" exitCode=0 Jan 30 15:28:49 crc kubenswrapper[4793]: I0130 15:28:49.174809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f"} Jan 30 15:28:51 crc kubenswrapper[4793]: I0130 15:28:51.196807 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerStarted","Data":"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751"} Jan 30 15:28:51 crc kubenswrapper[4793]: I0130 15:28:51.230732 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jk5h7" podStartSLOduration=3.07804903 podStartE2EDuration="7.230711971s" podCreationTimestamp="2026-01-30 15:28:44 +0000 UTC" firstStartedPulling="2026-01-30 15:28:46.143580015 +0000 UTC m=+6336.844928516" lastFinishedPulling="2026-01-30 15:28:50.296242966 +0000 UTC m=+6340.997591457" observedRunningTime="2026-01-30 15:28:51.220810048 +0000 UTC m=+6341.922158559" watchObservedRunningTime="2026-01-30 
15:28:51.230711971 +0000 UTC m=+6341.932060462" Jan 30 15:28:54 crc kubenswrapper[4793]: I0130 15:28:54.575472 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:54 crc kubenswrapper[4793]: I0130 15:28:54.575946 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:54 crc kubenswrapper[4793]: I0130 15:28:54.628644 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:55 crc kubenswrapper[4793]: I0130 15:28:55.280584 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:55 crc kubenswrapper[4793]: I0130 15:28:55.335125 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.250235 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jk5h7" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" containerID="cri-o://4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" gracePeriod=2 Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.698012 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.864154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"1f41cf99-6474-4b53-b297-0290b4566657\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.864222 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"1f41cf99-6474-4b53-b297-0290b4566657\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.864454 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"1f41cf99-6474-4b53-b297-0290b4566657\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.866450 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities" (OuterVolumeSpecName: "utilities") pod "1f41cf99-6474-4b53-b297-0290b4566657" (UID: "1f41cf99-6474-4b53-b297-0290b4566657"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.878234 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj" (OuterVolumeSpecName: "kube-api-access-h6mjj") pod "1f41cf99-6474-4b53-b297-0290b4566657" (UID: "1f41cf99-6474-4b53-b297-0290b4566657"). InnerVolumeSpecName "kube-api-access-h6mjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.939588 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f41cf99-6474-4b53-b297-0290b4566657" (UID: "1f41cf99-6474-4b53-b297-0290b4566657"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.967374 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.967657 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.967759 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") on node \"crc\" DevicePath \"\"" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.260869 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f41cf99-6474-4b53-b297-0290b4566657" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" exitCode=0 Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.260925 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.260947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751"} Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.262192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"76acece91eace693a7db849b9c561c197137451e4bc3f1f7ff8fcea4e1b97c9c"} Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.262225 4793 scope.go:117] "RemoveContainer" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.283416 4793 scope.go:117] "RemoveContainer" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.312879 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.325957 4793 scope.go:117] "RemoveContainer" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.326594 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.372547 4793 scope.go:117] "RemoveContainer" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" Jan 30 15:28:58 crc kubenswrapper[4793]: E0130 15:28:58.373505 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751\": container with ID starting with 4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751 not found: ID does not exist" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373547 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751"} err="failed to get container status \"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751\": rpc error: code = NotFound desc = could not find container \"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751\": container with ID starting with 4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751 not found: ID does not exist" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373575 4793 scope.go:117] "RemoveContainer" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" Jan 30 15:28:58 crc kubenswrapper[4793]: E0130 15:28:58.373866 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f\": container with ID starting with 0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f not found: ID does not exist" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373891 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f"} err="failed to get container status \"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f\": rpc error: code = NotFound desc = could not find container \"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f\": container with ID starting with 0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f not found: ID does not exist" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373909 4793 scope.go:117] "RemoveContainer" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" Jan 30 15:28:58 crc kubenswrapper[4793]: E0130 15:28:58.374250 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a\": container with ID starting with 5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a not found: ID does not exist" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.374282 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a"} err="failed to get container status \"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a\": rpc error: code = NotFound desc = could not find container \"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a\": container with ID starting with 5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a not found: ID does not exist" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.413618 4793 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="1f41cf99-6474-4b53-b297-0290b4566657" path="/var/lib/kubelet/pods/1f41cf99-6474-4b53-b297-0290b4566657/volumes" Jan 30 15:29:46 crc kubenswrapper[4793]: E0130 15:29:46.819403 4793 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.421s" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.150489 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm"] Jan 30 15:30:00 crc kubenswrapper[4793]: E0130 15:30:00.151671 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-content" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151692 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-content" Jan 30 15:30:00 crc kubenswrapper[4793]: E0130 15:30:00.151714 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-utilities" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151725 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-utilities" Jan 30 15:30:00 crc kubenswrapper[4793]: E0130 15:30:00.151754 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151763 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.152896 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.157653 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.160143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.160498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm"] Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.292736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.293135 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.293230 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.394989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.395110 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.395202 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.396190 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod 
\"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.411934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.486714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.775120 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:01 crc kubenswrapper[4793]: I0130 15:30:01.261521 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm"] Jan 30 15:30:02 crc kubenswrapper[4793]: I0130 15:30:02.086101 4793 generic.go:334] "Generic (PLEG): container finished" podID="a203b054-652c-4239-b471-4e7ef7665932" containerID="c337e6f14c81285f1bf99ab9b3d3d155367ee3babd91077c625549c27d6b85fe" exitCode=0 Jan 30 15:30:02 crc kubenswrapper[4793]: I0130 15:30:02.086186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" event={"ID":"a203b054-652c-4239-b471-4e7ef7665932","Type":"ContainerDied","Data":"c337e6f14c81285f1bf99ab9b3d3d155367ee3babd91077c625549c27d6b85fe"} Jan 30 15:30:02 crc kubenswrapper[4793]: I0130 15:30:02.086449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" event={"ID":"a203b054-652c-4239-b471-4e7ef7665932","Type":"ContainerStarted","Data":"f924ef12280d267664eb1609c2390e7c8fa089afcfea0c00e80a81a0aa9e10e5"} Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.439807 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.564188 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"a203b054-652c-4239-b471-4e7ef7665932\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.564433 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"a203b054-652c-4239-b471-4e7ef7665932\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.564471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod \"a203b054-652c-4239-b471-4e7ef7665932\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.566597 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume" (OuterVolumeSpecName: "config-volume") pod "a203b054-652c-4239-b471-4e7ef7665932" (UID: "a203b054-652c-4239-b471-4e7ef7665932"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.570614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp" (OuterVolumeSpecName: "kube-api-access-lgnlp") pod "a203b054-652c-4239-b471-4e7ef7665932" (UID: "a203b054-652c-4239-b471-4e7ef7665932"). InnerVolumeSpecName "kube-api-access-lgnlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.572291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a203b054-652c-4239-b471-4e7ef7665932" (UID: "a203b054-652c-4239-b471-4e7ef7665932"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.667446 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.667507 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.667532 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") on node \"crc\" DevicePath \"\"" Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.103356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" event={"ID":"a203b054-652c-4239-b471-4e7ef7665932","Type":"ContainerDied","Data":"f924ef12280d267664eb1609c2390e7c8fa089afcfea0c00e80a81a0aa9e10e5"} Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.103653 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f924ef12280d267664eb1609c2390e7c8fa089afcfea0c00e80a81a0aa9e10e5" Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.103441 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.525506 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.533525 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 15:30:06 crc kubenswrapper[4793]: I0130 15:30:06.414094 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" path="/var/lib/kubelet/pods/1c63ff2c-cb24-48c2-9af7-05d299d8b36a/volumes" Jan 30 15:30:42 crc kubenswrapper[4793]: I0130 15:30:42.414130 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:30:42 crc kubenswrapper[4793]: I0130 15:30:42.414773 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:31:02 crc kubenswrapper[4793]: I0130 15:31:02.175732 4793 scope.go:117] "RemoveContainer" containerID="2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45" Jan 30 15:31:12 crc kubenswrapper[4793]: I0130 15:31:12.414020 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 30 15:31:12 crc kubenswrapper[4793]: I0130 15:31:12.414634 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"